Ex-OpenAI Founder Ilya Sutskever Launches New AI Venture, Safe Superintelligence
Ilya Sutskever's new AI company, Safe Superintelligence (SSI), focuses on creating safe superintelligence, potentially rivalling OpenAI.
Ilya Sutskever, a prominent figure in the AI world and former chief scientist at OpenAI, has founded a new AI company named Safe Superintelligence (SSI). The move comes just a month after he left OpenAI and signals a significant shift in his focus towards AI safety.
A Mission for Safe Superintelligence
Safe Superintelligence (SSI) was born from Sutskever's commitment to addressing what he calls the "most important technical problem of our time" – creating a safe superintelligence. According to SSI's official X account, the company's mission is to ensure that superintelligent AI is developed with a primary focus on safety.
Prioritizing Safety
Sutskever's departure from OpenAI was largely driven by his growing concerns over the rapid pace at which AI technology was advancing under CEO Sam Altman. Known for his outspoken stance on safety, Sutskever even joined the board's effort to oust Altman as CEO late last year. That internal conflict highlighted broader concerns about the potential dangers of unchecked AI development.
In response, Sutskever's new venture, Safe Superintelligence, aims to tackle these issues head-on. As stated on SSI's X account, the company's entire mission, name, and product roadmap revolve around the concept of safe superintelligence. "SSI is our mission, our name, and our entire product roadmap because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI," the company emphasized.
"Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We've started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It's called Safe Superintelligence…"
— SSI Inc. (@ssi) June 19, 2024
SSI plans to approach the development of AI by balancing safety and capability advancements. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace," the company added.
The Impact on OpenAI
Sutskever's entry into the AI arena with SSI naturally sets up a potential rivalry with OpenAI. However, OpenAI holds a significant advantage with its years of experience and established products like GPT-4. Additionally, OpenAI's strategic partnerships with companies such as Apple position it well for continued growth and innovation.
Speaking to Bloomberg, Sutskever emphasized that SSI's primary focus will be achieving safe superintelligence, even if it means delaying the release of mass-market products. This cautious approach underscores his commitment to safety, suggesting that it may be some time before SSI brings any major products to market.
As the AI landscape continues to evolve, Sutskever's new venture could play a crucial role in shaping the future of safe AI development. Whether SSI will emerge as a serious competitor to OpenAI remains to be seen, but its focus on safety sets it apart in the rapidly advancing world of artificial intelligence.
© 2024 Hyderabad Media House Limited/The Hans India. All rights reserved.