How can breaking two "scissors gaps" accelerate AI deployment? Xilinx gives its answer
Updated: 2019-12-20 12:08:48
"We will deepen the research and development of big data, artificial intelligence and other applications, and foster new industrial clusters such as next-generation information technology, high-end equipment, biomedicine, new energy vehicles and new materials to strengthen the digital economy." "build an industrial Internet platform, expand the" smart + ", and empower the transformation and upgrading of the manufacturing industry."...Artificial intelligence (AI) was written into the government work report for the third year in a row at the recently concluded two sessions of the Chinese people's political consultative conference, and for the first time it was derived into the concept of "intelligence +".As a national strategy, artificial intelligence will accelerate the integration with industry and play an important role in the optimization and upgrading of economic structure.
AI was also a central theme at the recent 8th EEVIA Annual Chinese ICT Media Forum and 2019 Industry and Technology Outlook Seminar. Liu Jingxiu, AI market director at Xilinx, the global leader in adaptive and intelligent computing, opened his keynote "FPGA: the Acceleration Engine for AI Computing" with a plain-language reading of "Intelligent+": "The essence of AI is high-performance computing. Like electric power, it is a general-purpose capability that can drive industrial upgrading and product iteration across every industry."
The FPGA is the key to breaking the constraints of the two "scissors gaps"
Liu Jingxiu does not seem satisfied with the pace of AI deployment. In his view, the present is the era of "intelligent services" rather than of true AI, and he regards current projects such as human-machine voice dialogue and intelligent video as early-stage intelligent applications: "Human-machine dialogue, for example, can handle the most basic daily services, but it is hard to sustain a real conversation beyond twenty sentences; past that point it is essentially idle chatter." Compared with the attention from industry and the media, the actual pace of AI deployment in recent years has been somewhat slow. Liu identified two "scissors gaps" that hold back its development.
The first is the "scissors gap" between massive data and the processing power that computing chips can supply: constrained by the slowing of Moore's law, advances in the computing power of traditional chips have fallen far behind the compute demands of explosively growing data. The second is the "scissors gap" between the long cycle of chip development and the rapid iteration of markets and technology. A complete traditional chip development cycle usually takes 18 to 24 months, while today's AI projects often need working solutions within a few months in order to seize the market. Following the old, lengthy chip-development process, market demand may have fundamentally changed by the time chips ship.
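The first gap compounds over time: if data volume doubles faster than single-chip compute, the shortfall grows exponentially. The toy model below makes this concrete; the doubling rates are illustrative assumptions, not figures from the talk.

```python
# Toy model of the first "scissors gap": data (compute demand) growing
# faster than single-chip compute supply. The doubling periods below are
# illustrative assumptions, not measured figures.
data_doubling_years = 1.0      # assume data volume doubles every year
compute_doubling_years = 2.5   # assume chip compute doubles every ~2.5 years

def growth(years: float, doubling_period: float) -> float:
    """Exponential growth factor after `years`, doubling every `doubling_period`."""
    return 2.0 ** (years / doubling_period)

for t in (5, 10):
    gap = growth(t, data_doubling_years) / growth(t, compute_doubling_years)
    print(f"after {t} years, demand outstrips single-chip supply by {gap:.0f}x")
```

Under these assumed rates the gap is 8x after five years and 64x after ten, which is why the article argues the shortfall cannot be closed by waiting for process-node improvements alone.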
It is also an indisputable fact that today's AI chips must be built on 28 nm or even 16 nm processes. If AI compute were pursued purely through process iteration, the required investment and risk would be unbearable for most small and medium-sized enterprises and innovative startups; given the time-window problem, almost no company is willing or able to compete in this market that way. "So a programmable, flexible FPGA is the best choice. AI innovators can focus their core R&D resources on their specific domains (algorithms and frameworks) and applications, and create more value at those levels," Liu Jingxiu pointed out.
ACAP makes AI "fly": the first adaptive compute acceleration platform
In the 35 years since Xilinx invented the FPGA, the programmable logic device has won an irreplaceable place in communications, medical, industrial control and security, thanks to its advantages in performance, time to market, cost, stability and long-term maintainability, even while competing with traditional processors. In recent years, with the rise of cloud computing, high-performance computing and artificial intelligence, the FPGA's inherent strengths position it to begin an era of its own.
The FPGA's strength and prospects in AI, together with the company's transformation over the years, have helped Xilinx's stock nearly triple in the past three years. From pure FPGAs, to devices integrating DSP blocks and memory, to 28 nm parts integrating Arm cores, and on to the launch of RFSoC, Xilinx has consistently driven applications through technology innovation. Its forward-looking strategy is clearly not content merely to harvest the FPGA's incidental advantages in the AI era. Since taking office, president and CEO Victor Peng has made three strategic priorities increasingly explicit: data center first, accelerating growth in core markets, and driving adaptive computing. The launch of the new ACAP (Adaptive Compute Acceleration Platform) category lays a key foundation for extending Xilinx's advantages in the AI industry.
A highly integrated multi-core heterogeneous compute platform, ACAP has been called by the media Xilinx's sharp tool for next-generation computing and a new species in the Xilinx device family. To build ACAP, Xilinx invested thousands of engineers, five years of R&D, and more than $1 billion. Its core is a new-generation FPGA architecture whose hardware layer can be flexibly modified to suit the requirements of different applications and workloads; ACAP's adaptability can even be adjusted dynamically at run time, and its capabilities go far beyond the limits of an FPGA. At the forum, Liu Jingxiu also explained the first ACAP product, Versal: "As the name suggests, Versal = versatile + universal; it can support every kind of application developer." It is a fully software-programmable heterogeneous compute platform that combines Scalar Engines, Adaptable Engines and Intelligent Engines, delivering dramatic performance improvements: 20x over today's fastest FPGA and 100x over today's fastest CPU implementation.
The Versal family is built on TSMC's latest 7 nm FinFET process and is the first platform to combine software programmability with domain-specific hardware acceleration and adaptability. Its unique architecture offers scalability and AI inference capability for a wide range of applications across markets, including cloud, networking, wireless communications, and even edge and endpoint computing, opening an era of fast innovation for all developers building new applications. Xilinx has already released the Versal Prime series, Versal AI Core series and the flagship series, with the Versal AI Edge and Versal HBM series to follow.
From hardware platform to algorithm models: a complete toolchain makes AI easy to deploy
The explosive heat of AI and its huge market prospects have injected a stimulant into the global semiconductor market, and nearly every semiconductor company covets it; a variety of new processor products have already been released. "Building the chip itself is not the hard part. But without enough high-performance software, an ecosystem, a toolchain and a range of reference applications, getting to market takes much longer," Liu Jingxiu said. For Xilinx, the rich combination of traditional FPGA devices and the innovative ACAP platform provides many options for deploying AI. "For customers' AI development, traditional solutions do not provide enough support. Xilinx offers customers support at more levels: beyond the underlying hardware, there are IP blocks and software, as well as neural-network models at the application layer," Liu noted. Xilinx maintains a very rich library of neural-network models, with more than 70 vision-related models alone. As the AI market rises rapidly, Xilinx is transforming from a traditional chip vendor into a platform solution provider.
Xilinx's overall AI solutions, spanning edge/embedded and cloud/data center.
According to Liu Jingxiu, after the acquisition of DeePhi Tech, DeePhi's R&D team of more than 100 people has continued to focus on developing Xilinx's DNNDK (Deep Neural Network Development Kit). Oriented to the DPU (Deep-learning Processor Unit), Xilinx's AI heterogeneous compute engine, DNNDK supports model compression and compilation optimization for the inference stage of neural networks, plus efficient runtime support for different functional requirements. It provides a complete solution stack for developing and deploying all kinds of deep-learning applications on the DPU platform: efficient mapping from deep-learning algorithms onto DPU hardware, and a unified application development kit and programming interface for DPUs from mobile endpoints to the data center.
DNNDK not only greatly lowers the development threshold and deployment difficulty of deep-learning applications on the DPU platform, but also significantly shortens the path from development to market for AI products. "Xilinx defines an efficient instruction set and IP at the bottom of the solution. Combined with the interfaces provided by the complete toolset and SDK, a customer does not even need to write a line of code; simply invoking our IP resources is enough to support applications in different scenarios across industries," Liu said. By building a common processor platform and a comprehensive toolset, Xilinx strives to give customers an excellent, efficient development experience.
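The "model compression" step mentioned above typically means quantizing floating-point weights to fixed-point integers so they run efficiently on a DPU's integer datapaths. The sketch below illustrates the basic symmetric int8 quantization idea only; it is a conceptual example, not DNNDK's actual API or algorithm.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization of float weights to int8.

    Conceptual illustration of the compression step a DPU toolchain
    performs before compiling a network for fixed-point hardware;
    not the actual DNNDK implementation.
    """
    scale = np.abs(weights).max() / 127.0        # map largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights, e.g. to check accuracy loss."""
    return q.astype(np.float32) * scale

# A toy weight tensor: quantize it, then measure reconstruction error.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.abs(w - w_hat).max()
print(f"int8 storage is 4x smaller; max round-trip error = {max_err:.6f}")
```

Rounding to the nearest quantization step bounds the per-weight error by half a step (scale/2), which is why inference accuracy usually survives the 4x reduction in weight storage and bandwidth.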
Deploying AI in a concrete application scenario is a complex development process; with traditional processors it typically takes three to six months, or even a year. "With our current solution, a new network can be deployed onto the hardware in as little as a few hours, and the system brought up and running quickly," Liu stressed. Speed is among the most important considerations for today's AI startups and partners: by standing up a prototype quickly, real-world performance measurement, functional iteration and data collection can begin as early as possible, so products reach the market ahead of the competition.