What’s New in Mobile Processors This Year

By admin

Qualcomm says that its new high-end smartphone processor, the Snapdragon 8 Elite, represents a “quantum leap” rather than a simple update. The company showed off the all-in-one chip (SoC, for system on a chip) at its annual conference in Hawaii, and most manufacturers will probably use it in their 2025 flagship devices. Meanwhile, MediaTek’s high-end Dimensity chips, including the new Dimensity 9400, keep getting stronger and are ready to challenge Qualcomm, which sets the scene for what’s new in mobile processors this year.

Qualcomm needed an SoC that would truly make a difference in the highly competitive smartphone market, which was no easy task. MediaTek’s high-end Dimensity 9000 chip and its many iterations, including the new Dimensity 9400, are making it ever harder for Qualcomm to stay on top of the high-end segment. So the American company played its wild card, as in poker, and put all of its best technology on the table.

Revamped CPU and GPU architectures

At its heart is a CPU that behaves like a genuine prodigy, making a big difference in real-world performance, especially in games (up to 50%). And the CPU is not alone in this power play: every block of the chip posts gains of more than 40% over the previous generation. Architectural improvements clearly drive these numbers, but it is just as important to remember how much the manufacturing process contributes.

Qualcomm is following in the footsteps of Apple and of MediaTek, its main competitor, whose Dimensity 9400, launched last month, is already built on a 3 nm process. The CPU portion of this mobile chip is a genuine break, not a well-placed copy, in both its design and the way it works. The big news is the debut of Qualcomm’s Oryon architecture in a smartphone chip.

Race for power at all levels

Organizationally, it is a big change. Qualcomm is getting rid of its low-power cores, much as MediaTek did; last year’s Snapdragon 8 Gen 3 still had three of them. Qualcomm says its core management makes it possible to do without them entirely. In their place, the Snapdragon 8 Elite pairs two ultrahigh-performance Oryon cores running at up to 4.32 GHz with six high-performance cores at up to 3.53 GHz.
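As a rough illustration of that 2 + 6 layout (not part of Qualcomm’s announcement), the sketch below simply reads the per-core maximum clocks that the Linux kernel exposes on Android devices; the exact values reported will vary from handset to handset.

    # Minimal sketch: list each CPU core's maximum clock as exposed by the
    # Linux kernel (e.g. from an adb shell with Python available). On the
    # layout described above you would expect two cores near 4.32 GHz and
    # six near 3.53 GHz.
    import glob

    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq")):
        core = path.split("/")[5]          # e.g. "cpu0"
        with open(path) as f:
            khz = int(f.read().strip())    # value is reported in kHz
        print(f"{core}: {khz / 1_000_000:.2f} GHz")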

The promises of local AI execution

The Snapdragon 8 Elite is also the first mobile chip to support Unreal Engine 5’s Nanite technology, which is already available on PCs and consoles. But performance owes as much, and perhaps more, to how all the blocks work together as to the blocks themselves. In this race for power, Qualcomm had two main targets: first gaming, as with the iPhone 14 Pro and iPhone 14 Pro Max, and second, most crucially, artificial intelligence.

Neural processors were the last block added to smartphone chips, well before they reached PC chips. But the NPU is not the only place where these calculations run; the best processor for a job depends on the job itself. Qualcomm’s plan to make the Snapdragon 8 Elite a beast in all of these areas rests on its ability to run small LLMs locally. These lightweight models, such as Llama, Mistral and others, are trained with far fewer parameters than GPT and the other full-scale large language models, which count hundreds of billions of parameters, yet they can already understand and generate text, sound and images.
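To make the idea of running a small LLM locally concrete, here is a minimal sketch using the open-source llama-cpp-python bindings with a quantized model. The model file name is hypothetical, and this stands in for on-device inference in general, not for Qualcomm’s own software stack.

    # Minimal sketch of local LLM inference with a small quantized model.
    # The model file name is hypothetical; any compact GGUF build of Llama,
    # Mistral or a similar model would work the same way.
    from llama_cpp import Llama

    llm = Llama(
        model_path="mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
        n_ctx=2048,    # modest context window keeps memory use low
        n_threads=8,   # run on the device's CPU cores
    )

    result = llm(
        "Summarize why on-device inference improves privacy.",
        max_tokens=128,
        temperature=0.7,
    )
    print(result["choices"][0]["text"])  # everything happened on the device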

Conclusion

Running these small LLMs locally across the SoC’s different blocks will benefit both the user and the big players in the AI game. Requests for text or image output will be answered more quickly, which suits everyone: the request no longer needs to be translated, compressed and transmitted to the cloud, where it would wait in processing queues before the result comes back. Keeping the request on the device also gives you greater privacy: the titans of the field never see or reuse the text, sounds, photos or videos that the AI processes.

All the more so because the big players in cloud AI have already used (stolen?) more than enough user data. There is also an energy argument: the faster AI adoption grows, the more it costs in energy, and offloading work from the cloud to modest local LLMs could cut that bill considerably, even if the most powerful and capable models remain in the cloud. More broadly, news of such a large jump in performance raises fears of too-frequent trips to the charger, but Qualcomm is pairing the extra power with measures to save energy.
