500M Years of Evolution in 98B Parameters & JEPA: LeCun's Path to Human-Level AI

Plus, free AWS cloud credits inside this email

Before we start, share this week's news with a friend or a colleague:

Key Takeaways

  • OpenAI developed CriticGPT, a model that identifies errors in GPT-4 outputs to improve accuracy and reduce hallucinations.

  • Google released Gemini 1.5 Pro with a 2-million-token context window, along with Gemma 2, which delivers performance comparable to larger models like Llama 3. Does that mean you don't need RAG? See what DeepMind thinks on this topic below.

  • Meta’s LLM Compiler is a suite of pre-trained models for code and compiler optimization that outperforms GPT-4 Turbo on compiler optimization tasks.

  • ESM3 is a 98B-parameter model that can generate novel proteins, simulating the equivalent of over 500 million years of evolution.

  • Agent symbolic learning addresses the limitations of traditional, hand-crafted language agents and offers a path toward self-evolving agents, a step closer to AGI.

Got this newsletter forwarded to you? Subscribe below 👇

The Talk of the Day

WhatsApp is previewing the Llama 3 405B model for complex prompts, with access limited to a set number of prompts per week. Once the limit is reached, users are switched back to the default model, according to multiple reports on social media.

A menu available to Google Play Beta Program users (WhatsApp version 2.24.14.7) lets them pick which LLM to use.

Free Credits from the Intel Disruptor Initiative, AWS, and Activeloop

We're celebrating one year since the kick-off of our GenAI360: Foundation Model Certification, launched in collaboration with Intel and Towards AI. Nearly 40,000 course takers and thousands of GenAI360-certified professionals later, we're announcing a scholarship program for free AWS cloud credits. These scholarships will let you work through hands-on examples on Intel® Xeon® Scalable Processors in the AWS cloud environment.

As a member of our community, you're welcome to apply regardless of your seniority level!

The Latest AI News

OpenAI followed in Anthropic's footsteps by developing a model that finds flaws in GPT-4's outputs. Google was busy extending Gemini 1.5 Pro's context window and releasing Gemma 2, while competition continues to heat up in the AI chip space with another startup entering the market, this time with a unique twist.
