At the turn of spring into summer, it is the season when the world's top technology companies hold their annual events. Two weeks ago it was Google; this week it was Microsoft's turn. In just 43 minutes, Satya Nadella, Microsoft's chairman and CEO, delivered the keynote of this year's Build 2022 conference. At a brisk, allegro-like pace, he announced more than 50 new products, technologies, and project updates across ten areas, including developer workflow, cloud computing, microservices, commercialization of AI models, low code, and the industrial metaverse.
The most noteworthy new releases include:
Azure OpenAI Service, a commercial large-model offering built on OpenAI's technology;
Machine-learning-driven low-code/no-code development features, plus the GitHub Copilot AI coding assistant officially opening to the public;
A first foray into the industrial metaverse;
Project Volterra, a developer prototype device for Windows on Arm developers;
Live Share, a new real-time collaboration "applet" platform from the office collaboration software Teams; and more.
Among the many announcements, the vast majority can be grouped under three themes:
Use AI and AI-powered tools to free developers and business users from tooling friction, e.g. the GitHub Copilot code-generation tool and Dev Box, a virtual machine that keeps a development environment in sync across devices;
Lower the development barriers between computing platforms (chip architectures, operating systems, etc.) and give developers end-to-end capabilities for building "AI-infused applications," e.g. the Arm development and testing prototype Project Volterra and Azure's machine learning services;
Use AI-driven automation to further improve office productivity, e.g. the low-code website builder Power Pages and automatic summaries of customer-service conversations.
Seen from these angles, the Microsoft whose former CEO Steve Ballmer once bellowed "Developers! Developers! Developers!" is still holding on to that ambition. Only if developers keep supporting and building on the Microsoft-led enterprise-services and cloud-computing ecosystem can the company keep its footing and avoid becoming the next HP or Yahoo.
Let's take a look at the important developer tools Microsoft released today.
Commercializing large models and turning them into a sharp tool for developers
When OpenAI first opened up GPT-3, many third-party developers obtained test access and built some very creative demos.
No one's access, however, runs deeper than Microsoft's. In GPT-3 and OpenAI's broader efforts on large and very large generative language and multimodal models, Microsoft sees great commercial prospects; the two companies reached a strategic partnership back in 2019.
And arguably only Microsoft, deeply versed in the office and enterprise markets, can truly and efficiently turn these technologies into an engine for business growth.
At today's Build conference, that Microsoft-OpenAI partnership finally bore fruit: the Azure cloud platform officially launched the Azure OpenAI Service (in preview), which developers can apply to try. It lets OpenAI's code-generation and language-generation models be applied across a wide range of scenarios.
In short, it lets any application get the boost of large AI models.
Azure OpenAI Service
Take CarMax, an online used-car retailer, as an example:
For non-experts, wading through hundreds of listings and reviews about second-hand cars and their technology is a headache. CarMax is using the Azure OpenAI Service, pairing GPT-3's strong "reading comprehension" with the enterprise-grade capabilities of the Azure cloud platform, to generate distilled information about each vehicle.
Now CarMax users can get close to the real picture of a vehicle from a single model-generated paragraph covering things like how many people it seats, interior space and comfort, and fuel economy. The service eases the mental burden of car shopping, reduces transaction friction, and improves the odds of a sale.
Diagram of CarMax using the Azure OpenAI Service. Animated image source: Microsoft
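For developers curious what a call to the service might look like in practice, here is a minimal sketch of a summarization request to an Azure OpenAI completions deployment, loosely modeled on the CarMax scenario. It is written in TypeScript against the service's REST pattern; the resource name, deployment name, api-version, and prompt are illustrative assumptions, not details of Microsoft's or CarMax's actual setup.

```typescript
// Minimal sketch: summarize a pile of car reviews with an Azure OpenAI completions deployment.
// Resource, deployment and api-version below are placeholder assumptions for illustration.
const endpoint = "https://my-resource.openai.azure.com"; // assumed Azure OpenAI resource
const deployment = "text-davinci-002";                    // assumed GPT-3 deployment name
const apiVersion = "2022-06-01-preview";                  // assumed preview API version

async function summarizeReviews(reviews: string[]): Promise<string> {
  const prompt =
    "Summarize the following used-car reviews in one short paragraph " +
    "covering seating, space and comfort, and fuel economy:\n\n" +
    reviews.join("\n---\n") +
    "\n\nSummary:";

  const res = await fetch(
    `${endpoint}/openai/deployments/${deployment}/completions?api-version=${apiVersion}`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "api-key": process.env.AZURE_OPENAI_KEY ?? "", // key issued with preview access
      },
      body: JSON.stringify({ prompt, max_tokens: 120, temperature: 0.2 }),
    }
  );
  if (!res.ok) throw new Error(`Azure OpenAI request failed: ${res.status}`);

  const data = (await res.json()) as { choices: { text: string }[] };
  return data.choices[0].text.trim();
}
```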
Another release worth mentioning: Express Design, a low-code/no-code feature that can turn a design mockup directly into a working application interface, or even a complete app, in just a few seconds.
Express Design is part of Microsoft's low-code development tool Power Apps. It accepts many kinds of input, including Figma files, PDFs, PowerPoint decks and other document formats, and even hand-drawn sketches. Behind it are, again, the generative language and multimodal models developed by Microsoft and partners including OpenAI.
Power Apps developers and users can start using Express Design today.
Express Design converts a design mockup into a usable application prototype interface in just a few seconds. Source: Microsoft
As noted earlier, we have indeed seen many interesting and promising demos in the past, but applying large models in real-world scenarios has always been a hard problem for research organizations.
Combining very large models with the Azure cloud platform to enable automatic creation of code and content at scale is also what Microsoft means by the "models as platforms" concept it proposed this year.
The various OpenAI demos of two years ago showed us the potential of generative language models with huge parameter counts. What Microsoft is doing today is not just handing large models over to developers, but also teaching them how to unlock more of that potential. Commercializing large models is not simply a matter of opening up an API; the technology is still brand new, and developers need to be taught to fish, not merely given fish.
With the Azure OpenAI Service, Express Design, and other services and features, Microsoft has made itself the first mover in commercializing large models.
Screen sharing in video meetings just got more high-tech
Since the start of the pandemic, most of us have grown used to working from home and meeting over video, and many people routinely use the screen-sharing feature when walking through slides.
In the view of the team behind Microsoft's office collaboration software Teams, however, many companies now working remotely need to carry out more complex collaboration across a variety of software, and a "passive" screen-sharing feature cannot meet those needs.
To that end, Teams launched the Live Share feature this year.
Live Share is not a replacement for screen sharing; think of it as an "advanced version" of it. Concretely, Live Share embeds a third-party application's interface into the video meeting, a bit like a remote desktop, except that all participants can not only watch but also interact with it.
Take the 3D modeling software from Hexagon shown in the figure below as an example: a Live Share session can be launched inside a Teams meeting, and other participants can edit the 3D model and view it from their own angles. This more hands-on collaboration experience lets participants join the brainstorming far more intuitively.
Teams' new Live Share real-time collaboration feature lets complex, interactive third-party programs be embedded in meetings, greatly expanding what screen sharing can do in video conferences. Source: Microsoft
Take the simplest scenario:
An application development team is demoing its product. You want to see how clicking a menu item or dragging an icon behaves; in the past, you had to raise your hand and ask the presenter to perform the action for you;
With Live Share, you can now do it on your own screen. Each participant can perform different actions at the same time without affecting what anyone else sees. Everyone ends up with a deeper understanding of the material, and the meeting can be noticeably shorter.
(In developer terms: Live Share is like turning the presented content into a virtualized instance, and each participant gets an instance of their own.)
Live Share is built on a front-end framework developed by Microsoft (the Fluid Framework). Third-party application developers only need to integrate the newly released Live Share SDK into their products to enable support for the feature.
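As a rough idea of what that integration involves, here is a minimal sketch of joining a Live Share session and synchronizing a single piece of shared state. It follows the publicly documented shape of the Live Share SDK and the Fluid Framework underneath it; the package names, class names, and event signatures used here are assumptions based on that documentation and may not match the preview SDK exactly.

```typescript
// Minimal sketch: join a Live Share session inside a Teams meeting and share one piece
// of state. Names follow the documented Live Share SDK / Fluid Framework shape; treat
// them as assumptions, since the preview API may differ.
import { app, LiveShareHost } from "@microsoft/teams-js";
import { LiveShareClient } from "@microsoft/live-share";
import { SharedMap } from "fluid-framework";

async function joinSharedSession() {
  await app.initialize();                // initialize the Teams client SDK
  const host = LiveShareHost.create();   // host bound to the current meeting
  const client = new LiveShareClient(host);

  // Every participant who joins gets the same Fluid container and shared objects.
  const schema = { initialObjects: { appState: SharedMap } };
  const { container } = await client.joinContainer(schema);
  const appState = container.initialObjects.appState as SharedMap;

  // Local edits are broadcast to all participants; remote edits arrive as events.
  appState.set("selectedMenuItem", "settings");
  appState.on("valueChanged", (changed, local) => {
    if (!local) {
      console.log(`Peer updated ${changed.key} ->`, appState.get(changed.key));
    }
  });
}

joinSharedSession().catch(console.error);
```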
Capabilities enabled through Live Share include basic interface interaction, synchronized audio and video playback, and multi-user editing, as well as planning poker (agile estimation). Partners for the feature include the video collaboration service Frame.io, design-software company Hexagon, consulting firm Accenture, the planning-poker service Parabol, and others.
It's fair to expect that, with Live Share, more developers and collaboration-heavy teams will lean toward Teams when choosing office collaboration and video conferencing solutions.
An open hardware ecosystem for new-platform and cross-platform developers
If you had discussed open source a decade or so ago, nobody would have held up Microsoft as a positive example. Since 2015, however, from open-sourcing .NET, to Visual Studio supporting development across multiple operating systems and languages, to the strategic acquisition of and technical support for GitHub, Microsoft has become one of the most active and important contributors to the open technology ecosystem.
As a vested beneficiary of the Wintel alliance, Microsoft is nonetheless not shy about the decline of the x86 computing platform. Over the past few years it has proved itself in supporting multi-platform development, and at this Build conference the company took another key step toward an open hardware ecosystem.
Today, Microsoft launched Project Volterra, an Arm-architecture developer kit:
"We believe an open hardware ecosystem for Windows can give developers more flexibility and choice, and help them build products that support a wide variety of scenarios," Microsoft said. Project Volterra is exactly such a product, built to help Arm-architecture developers.
Project Volterra is built on the Snapdragon compute platform that Qualcomm has been pushing in recent years, with a built-in neural processing unit (NPU) that can run machine-learning inference, and some training, at low power. The developer prototype runs Windows on Arm and is aimed at developers whose main working environment is Windows or the Windows Subsystem for Linux.
Project Volterra integrates multiple I/O ports, and Microsoft says the machine uses a stackable design, which appears to mean that several units can be stacked to run workloads in something like a parallel-computing mode:
Over the past decade, Microsoft's own Surface devices have had little to show for their Windows on Arm attempts, and some have outright failed, from the Surface RT years ago to the dual-screen Neo and Duo devices of the past two years.
Even as the x86 computing platform declines, the Arm architecture is unquestionably the most important computing platform in the consumer and Internet of Things markets. Microsoft has not abandoned the Arm market because of those failures; much as it went all in on open source a few years ago, the company is now enthusiastically embracing the Arm architecture:
Beyond Project Volterra, Microsoft also announced end-to-end Arm support across the entire Windows platform, along with a series of Arm-native toolchains including, but not limited to, Visual Studio / VS Code, Visual C++, and .NET Framework.
On the back of this strong support for the Arm and Snapdragon NPU platforms, Microsoft is laying out a grander plan: Hybrid Loop.
Hybrid Loop is a cross-platform AI development pattern. Its ultimate goal is to let any neural-network model serve any application, with the help of Azure ML and ONNX Runtime (which is compatible with a variety of neural-network frameworks), and be deployed to CPUs, GPUs, NPUs, FPGAs, and other mainstream computing hardware.
Project Volterra is a first attempt at realizing that grand plan.
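Hybrid Loop itself is still a plan rather than a shipping product, but the ONNX Runtime half of the equation is usable today. Below is a minimal sketch of loading an exported ONNX model and running inference with onnxruntime-node, where the execution-provider list is what decides which hardware backend does the work; the model file, input name, and tensor shape are assumptions chosen purely for illustration.

```typescript
// Minimal sketch: run an ONNX model with onnxruntime-node. The execution-provider list
// selects the hardware backend; model path, input shape and values are illustrative only.
import * as ort from "onnxruntime-node";

async function runModel() {
  // "cpu" works everywhere; where the matching build and hardware exist, providers such
  // as "cuda" or "dml" can be listed instead to target GPU- or NPU-capable backends.
  const session = await ort.InferenceSession.create("model.onnx", {
    executionProviders: ["cpu"],
  });

  // Dummy input: a single 224x224 RGB image, all zeros.
  const data = new Float32Array(1 * 3 * 224 * 224);
  const input = new ort.Tensor("float32", data, [1, 3, 224, 224]);

  // Feed the tensor under the model's first declared input name.
  const outputs = await session.run({ [session.inputNames[0]]: input });
  console.log("output names:", session.outputNames);
  console.log("first output dims:", outputs[session.outputNames[0]].dims);
}

runModel().catch(console.error);
```

Retargeting the same code at different hardware by swapping the provider list is, in miniature, the kind of portability Hybrid Loop aims to generalize across Azure ML and client devices.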
Those are the new products and technologies from this year's Build conference that we think are most worth talking about. As mentioned at the start of the article, this year's keynote packed more than 50 announcements across 10 categories into 43 minutes; if you want to see all of them, you can visit Microsoft's official website.
One last thing worth adding: a hallmark of this year's Build developer conference is that it is "not selling futures." Most of the 50-plus announcements are available to developers and the public, to varying degrees, right after the event.
You can watch the keynote and session replays on Microsoft's official website, and learn about more technologies and product releases not covered in this article.
Text by Du Chen