SolidityBench by IQ has launched as the first leaderboard for evaluating LLMs on Solidity code generation. Available on Hugging Face, it introduces two benchmarks, NaïveJudge and HumanEval for Solidity, designed to assess and rank the proficiency of AI models in generating smart contract code.
Developed by IQ’s BrainDAO as part of its forthcoming IQ Code suite, SolidityBench serves to refine BrainDAO’s own EVMind LLMs and to compare them against generalist and community-created models. IQ Code aims to offer AI models tailored for generating and auditing smart contract code, addressing the growing need for secure and efficient blockchain applications.
As IQ told CryptoSlate, NaïveJudge offers a novel approach by tasking LLMs with implementing smart contracts based on detailed specifications derived from audited OpenZeppelin contracts. These contracts provide a gold standard for correctness and efficiency. The generated code is evaluated against a reference implementation using criteria such as functional completeness, adherence to Solidity best practices and security standards, and optimization efficiency.
The evaluation process leverages advanced LLMs, including different versions of OpenAI’s GPT-4 and Claude 3.5 Sonnet, as impartial code reviewers. They assess the code against rigorous criteria: implementation of all key functionality, edge-case handling, error management, proper syntax, and overall code structure and maintainability.
Optimization considerations such as gas efficiency and storage management are also evaluated. Scores range from 0 to 100, providing a comprehensive assessment across functionality, security, and efficiency, mirroring the complexities of professional smart contract development.
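IQ has not published NaïveJudge’s exact rubric or weighting, but the process it describes maps onto a standard LLM-as-judge harness. The sketch below is purely illustrative: the category weights, the prompt wording, and the `judge_llm` stub are assumptions for demonstration, not details taken from SolidityBench.

```python
# Minimal LLM-as-judge sketch for a NaiveJudge-style evaluation.
# The weights, prompt, and judge_llm stub are hypothetical illustrations.

WEIGHTS = {"functionality": 0.5, "security": 0.3, "optimization": 0.2}  # assumed split

JUDGE_PROMPT = """You are an impartial Solidity code reviewer.
Compare the CANDIDATE implementation against the audited REFERENCE.
Score each category from 0 to 100:
- functionality: key features, edge cases, error handling
- security: Solidity best practices and security standards
- optimization: gas efficiency and storage management

REFERENCE:
{reference}

CANDIDATE:
{candidate}
"""

def judge_llm(prompt: str) -> dict[str, int]:
    """Stub standing in for the judge model (e.g. GPT-4 or Claude 3.5 Sonnet).
    A real harness would send `prompt` to the model and parse a structured
    response; fixed scores are returned here so the sketch runs as-is."""
    return {"functionality": 85, "security": 70, "optimization": 60}

def naive_judge_score(reference: str, candidate: str) -> float:
    """Combine per-category judge scores into a single 0-100 score."""
    scores = judge_llm(JUDGE_PROMPT.format(reference=reference, candidate=candidate))
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

print(naive_judge_score("contract ERC20 { ... }", "contract MyToken { ... }"))  # 75.5
```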
Which AI models are best for Solidity smart contract development?
Benchmarking results showed that OpenAI’s GPT-4o model achieved the highest overall score of 80.05, with a NaïveJudge score of 72.18 and HumanEval for Solidity pass rates of 80% at pass@1 and 92% at pass@3.
Interestingly, OpenAI’s newer reasoning models, o1-preview and o1-mini, fell short of the top spot, scoring 77.61 and 75.08, respectively. Models from Anthropic and xAI, including Claude 3.5 Sonnet and Grok-2, demonstrated competitive performance with overall scores hovering around 74. Nvidia’s Llama-3.1-Nemotron-70B scored lowest in the top 10 at 52.54.
Per IQ, HumanEval for Solidity adapts OpenAI’s original HumanEval benchmark from Python to Solidity, encompassing 25 tasks of varying difficulty. Each task includes corresponding tests compatible with Hardhat, a popular Ethereum development environment, facilitating accurate compilation and testing of generated code. The evaluation metrics, pass@1 and pass@3, measure the model’s success on initial attempts and over multiple tries, offering insights into both precision and problem-solving capabilities.
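IQ does not say whether SolidityBench reports simple empirical pass rates or the unbiased estimator introduced with the original HumanEval benchmark (Chen et al., 2021), but the standard definition is worth spelling out: generate n samples per task, count the c that pass the tests, and estimate the probability that at least one of k drawn samples passes. A direct Python translation of that estimator:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021):
    the probability that at least one of k samples, drawn without replacement
    from n generations of which c pass the unit tests, is correct."""
    if n - c < k:
        return 1.0  # every possible size-k draw contains a passing sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 10 generations per task, 6 of which pass the Hardhat tests
print(pass_at_k(10, 6, 1))  # 0.6
print(pass_at_k(10, 6, 3))  # ~0.967
```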
Goals of using AI models in smart contract development
By introducing these benchmarks, SolidityBench seeks to advance AI-assisted smart contract development. It encourages the creation of more sophisticated and reliable AI models while providing developers and researchers with valuable insights into AI’s current capabilities and limitations in Solidity development.
The benchmarking toolkit aims to advance IQ Code’s EVMind LLMs and to set new standards for AI-assisted smart contract development across the blockchain ecosystem. The initiative hopes to address a critical need in an industry where demand for secure and efficient smart contracts continues to grow.
Developers, researchers, and AI enthusiasts are invited to explore and contribute to SolidityBench, which aims to drive the continuous refinement of AI models, promote best practices, and advance decentralized applications.
Visit the SolidityBench leaderboard on Hugging Face to learn more and begin benchmarking Solidity generation models.