Apple is Looking to Join the AI Race + New Models Coming to MindStudio

Apple is looking to join the AI race with OpenAI and Google, and MindStudio is adding new models plus metered DALL-E 3

Last week, MindStudio spent time building new infrastructure for adding language and image models to the platform.

In parallel, the AI industry didn’t move as fast as usual, with Apple as the main player. They released a few open-source models (with quite disappointing performance) and announced a potential partnership with OpenAI. Developers are playing around with Llama 3 and trying to expand its context window, with some achieving good results at up to 96k tokens.

Keep reading to learn more!

AIs of the Week

Submit your AI in a reply to this email

New Guides for Pros

🗞️ Industry news

  1. Apple releases open source models that run on-device

    Apple has introduced OpenELM, a collection of small AI language models designed to run directly on devices like smartphones.

    The models range in size from 270 million to 3 billion parameters and come in both pre-trained and instruction-tuned versions.

    OpenELM is a proof of concept: a step toward more capable AI features on consumer devices, without relying on cloud-based computing.

    The models and associated training tools are released under the Apple Sample Code License on Hugging Face, with a focus on transparency and reproducibility to aid open research (see the loading sketch after this list).

    So far, the performance metrics aren’t particularly impressive. We hope to see more from Apple in the coming months, especially after the recent announcement that they’re looking to work with OpenAI.

  2. Perplexity raises $62.7 million at a unicorn valuation

    The AI search engine has grown to serve 169M queries per month and more than 1 billion queries in total over the last 15 months.

    Alongside the funding announcement, Perplexity released its Enterprise offering, its first B2B package, already adopted by companies like NVIDIA, Databricks, Zoom, and more.

    MindStudio is looking to integrate Perplexity’s API into our product very soon, which will enable web-powered results with LLMs in the background. The Perplexity API gives access to a plethora of models, including Meta’s new Llama 3 (see the API sketch after this list).

  3. Hume AI releases their API, bringing empathic voices to the AI game

    Hume AI’s API is now publicly available. Unlike conventional voice AI that relies solely on transcription and text-to-speech, their Empathic Voice Interface (EVI) incorporates an empathic large language model (eLLM) that analyzes the tone, rhythm, and timbre of a user’s voice to generate responses that are not only contextually relevant but emotionally resonant.

    This API enables more human-like dialogues, where the AI can adapt its responses based on the emotional cues of the user. Additional customization options are available through the Configuration API, allowing for tailored prompts, integration with other language models, and control over the conversational dynamics.

    MindStudio is looking to bring more than just text models to the platform. Eleven Labs and Hume AI are both on our radar, though neither has been confirmed yet. Let us know what you think about text-to-speech and speech-to-speech models, and whether you’d benefit from having a few in MindStudio!
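
Here’s the loading sketch mentioned in the Apple item: a minimal example of running an OpenELM checkpoint locally. Treat the details as assumptions rather than gospel: the checkpoint name comes from the Hugging Face model cards, the weights load through transformers with trust_remote_code enabled, and the (gated) Llama 2 tokenizer is the pairing the cards suggest.

```python
# Minimal sketch: run an OpenELM checkpoint locally with transformers.
# Assumptions: model id per the Hugging Face cards; trust_remote_code required;
# the gated meta-llama/Llama-2-7b-hf tokenizer is the suggested pairing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"  # smallest instruction-tuned variant

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated repo
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Explain on-device AI in one sentence:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```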
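
And the API sketch from the Perplexity item: while we work on the integration, you can call the API yourself today. Perplexity’s endpoint is OpenAI-compatible, so the standard openai client works with a different base URL; the model id below is an assumption based on their docs at the time of writing, so check the current model list before relying on it.

```python
# Hedged sketch: query Perplexity's OpenAI-compatible chat API directly.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # placeholder, not a real key
    base_url="https://api.perplexity.ai",  # Perplexity's endpoint
)

response = client.chat.completions.create(
    model="llama-3-70b-instruct",  # assumed id; Perplexity's model names change often
    messages=[{"role": "user", "content": "Summarize this week's AI news in 3 bullets."}],
)
print(response.choices[0].message.content)
```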

🔥 Product Updates

This week has been on the calmer side, but that doesn’t mean we haven’t been working hard to bring you the latest and best innovations in AI.

MindStudio officially released its Agency offering, restructured its homepage, and added a new form for enterprise partnerships. If you’re an agency, you can sign up to join the program here.

The team is also working on:

  • Adding Image Generation Capabilities: you can already use DALL-E 3 in MindStudio with your own API key, but we want to make the process easier. Soon, you’ll be able to generate images from the editor just like you add a “send a message” block (see the sketch after this list);

  • New models: a plethora of new, advanced models were released in the past ten days. MindStudio will include the most relevant ones, like Gemini 1.5 Pro, GPT-4 Turbo (new), and Llama 3;

  • Certifications: our product team is working to achieve SOC 2 compliance in the coming weeks. After that, we’ll look into HIPAA and GDPR to build up our Enterprise offering. We know certifications are crucial for enterprise teams handling PII or sensitive data, and we want to offer the best platform for your needs.

  • New learn page: we want our education centre to be easy to understand, comprehensive, and… good looking! We’re working on a redesign of the learn page and all the content within it.
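
On the image generation bullet above: until the dedicated block ships, the bring-your-own-key route is a single call to OpenAI’s images endpoint. A minimal sketch (the key and prompt below are placeholders of ours):

```python
# Minimal sketch: generate an image with DALL-E 3 via OpenAI's images API.
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # placeholder

result = client.images.generate(
    model="dall-e-3",
    prompt="A clean, flat illustration of an AI assistant building a workflow",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # temporary hosted URL of the generated image
```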

💡 Tip of The Week

The “resolved” message in the debugger is a treasure trove of information to help you debug, improve your prompt, and save tokens.

In the debugger, each “programmatic” and “background” message includes two values:

  • Sending Message: your prompt as written, with the instructions and the variables still in curly brackets;

  • Resolved Message: your actual prompt, aka what’s being sent to the AI. It matches the sending message, but every variable has been resolved to its current value, so you can see exactly what ended up inside each one.
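
To picture what “resolving” does, here’s a toy resolver. The double-curly-brace syntax and the function below are our own simplification for illustration, not MindStudio’s internal implementation:

```python
# Toy illustration of sending -> resolved; not MindStudio's actual resolver.
import re

def resolve(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its current value."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),  # keep unknowns visible
        template,
    )

sending = "Answer using these snippets:\n{{query_results}}\n\nQuestion: {{user_question}}"
resolved = resolve(sending, {
    "query_results": "- MindStudio released its Agency offering...",
    "user_question": "What changed this week?",
})
print(resolved)  # this is what the model actually receives
```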

This is useful for many reasons.

First of all, it lets you troubleshoot data sources. If you’re referencing a variable containing snippets from a query, those snippets should appear in the resolved message. If they don’t, the query data source failed. If they do but look wrong, you might want to change your prompt.

Secondly, it lets you save prompt tokens. Now that you know what an example final message looks like, you can copy-paste the resolved message and test it with the model without running the whole workflow again. If you want to learn more about cost-saving techniques, take a look at our 20-minute masterclass here.

The debugger is an advanced feature, but you shouldn’t be intimidated by it! It’s actually quite easy to grasp. Here’s a full tutorial.

🤝 Community Events

You can now register for all our upcoming workshops on our redesigned event home page. You can find it here.

Here are the next three upcoming workshops:

MindStudio Live Build: Learn How to Build a Content Generator, Apr 30th
Register here

How to use Custom AI Apps to Automate Your Role, May 7th
Register here

MindStudio Live Build: Learn How to Build a Meeting Summarizer
Register here

We heard your feedback and decided to focus on live builds and on features with a steep learning curve, like RAG.

As a general reminder, our Discord group is an invaluable source of information, news, and more. The entire MindStudio team is active on the platform.

We host these hangouts every Friday, at approximately the same time unless something comes up :)

🌯 That’s a wrap!

Stay tuned to learn more about what’s next and get tips & tricks for your MindStudio build.

You saw it here first,

Giorgio Barilla
MindStudio Developer & Project Manager @ MindStudio

How did you like this issue?
