Image: Apple

Decentralized platform io.net becomes the first cloud service provider to support Apple’s new silicon chips. 

In an announcement this Wednesday, io.net, a decentralized cloud service provider that deploys and manages on-demand GPU and CPU clusters on the IO Network, revealed support for Apple's new silicon M-series chips on its network. This makes the platform the first cloud service provider to support Apple's chips, in a bid to extend machine learning and artificial intelligence development across the Web3 space.

Speaking on the latest expansion, io.net founder Ahmad Shadid called the move a "massive step forward" for io.net's ambition to democratize access to powerful computing resources and open the platform to a new market.

“We are thrilled to be the first cloud service provider to support Apple chips for machine learning,” he stated. “This is a massive step forward in democratizing access to powerful computing resources, and paves the way for millions of Apple users to earn rewards for contributing to the AI revolution.”

By tapping into Apple's massive user base, ML/AI engineers will be able to deploy thousands of Apple chips in a single cluster within seconds. This supports io.net's mission to grow the AI and ML technology base, providing engineers with lower-cost, more accessible GPU and CPU power sourced from millions of Apple users.

For their part, Apple's powerful M-series silicon chips combine CPU, GPU, and Neural Engine capabilities on a single chip, delivering lower latency and higher data-processing efficiency suited to ML and AI applications. By supporting the full range of Apple silicon chips, including the M1, M1 Pro, M1 Max, and M1 Ultra; the M2, M2 Pro, M2 Max, and M2 Ultra; and the M3, M3 Pro, M3 Max, and M3 Ultra (coming soon), io.net gives Apple users the option to contribute their unutilized compute resources and earn rewards in return.

Apple M3 chip line pushes performance to new heights

The introduction of Apple silicon chips will be a game changer for io.net's users, especially with the upcoming launch of the M3 chip. The M3's unified memory architecture, with support for up to 128GB of memory, surpasses the 80GB capacity of even NVIDIA's A100 graphics card. The chip also benefits from an enhanced Neural Engine that is up to 60% faster than the M1's, making the line well suited to model inference.

The M3 Max further packs over 90 billion transistors and a 40-core GPU, making it a top-of-the-line chip for ML and AI models. Users who offer more of their chips' unutilized capacity can earn larger rewards while enabling faster calculations and more efficient execution across the io.net platform.

With a few clicks, Apple users will now be able to join one of the largest and fastest-growing decentralized cloud services, onboarding their devices to the network and earning rewards. The IO Network, for its part, will reach a larger audience and offer more computing capacity as engineers continue building the Internet of GPUs to address the compute shortage driven by the ongoing boom in AI and machine learning.

Disclaimer: This article is provided for informational purposes only. It is not offered or intended to be used as legal, tax, investment, financial, or other advice.