C910

C910 is a RISC-V-compatible processor core with a 12-stage superscalar pipeline, enhanced for arithmetic operations, memory access, and multi-core synchronization. It includes a standard memory management unit and can run operating systems such as Linux. Built on a 3-issue, 8-execution-unit out-of-order architecture, it can be equipped with a single-/double-precision floating-point engine, and optionally with a vector computing engine for AI acceleration, making it suitable for high-performance application fields such as 5G and artificial intelligence.

Architecture Features

  • Instruction set: RISC-V RV64GC/RV64GCV

  • Multi-core: Homogeneous multi-core with 1 to 4 optional clusters; each cluster can have 1 to 4 optional cores

  • Pipeline: 12-stage

  • Microarchitecture: Tri-issue (superscalar), out-of-order

  • General-purpose registers: 32 64-bit GPRs, 32 64-bit FPRs, and 32 128-bit vector registers

  • Cache: Two-level cache hierarchy; I-cache: 32 KB/64 KB (size options); D-cache: 32 KB/64 KB (size options); L2 cache: 128 KB to 8 MB (size options)

  • Cache protection: Optional ECC and parity checking

  • Bus interface: One 128-bit master interface

  • Memory protection: On-chip memory management unit with hardware TLB backfilling

  • Floating-point engine: Supports single- and double-precision floating-point operations

  • AI vector calculation engine: Dual 128-bit operation width, supporting parallel computing on half-/single-/double-precision floating-point and 8-/16-/32-/64-bit integer elements

  • Multi-core coherence: Quad-core shared L2 cache, supporting cache data coherence

  • Interrupt controller: Supports a multi-core shared interrupt controller

  • Debugging: Supports multi-core collaborative debugging

  • Performance monitoring: Supports a hardware performance monitoring unit

Featured Technology

  • AI vector acceleration engine: Provides dedicated vector operation instructions to accelerate various typical neural networks

  • Multi-cluster scaling: Provides up to 16 cores to further improve computing performance

  • Hybrid branch processing: Hybrid branch prediction covering branch direction, branch target address, function return address, and indirect jump address to improve instruction-fetch efficiency

  • Data prefetching: Multi-channel, multi-mode data prefetching that greatly improves data-access bandwidth

  • Fast memory loading: Loads memory data in advance to reduce load-to-use latency

  • Memory speculation prediction: Predicts out-of-order speculative memory accesses to improve execution efficiency

Architecture Diagram

Industrial Applications