Apple Enters the AI Arena & Reclaims Its Throne As Most Valuable Company

WWDC 2024 was a historic event for Apple. With the introduction of Siri 2.0, Private Cloud Compute, and a confirmed deal with OpenAI, Apple overtook Microsoft in market capitalization.

You're receiving this email because you registered for one of our workshops. You can unsubscribe at any time via the link at the bottom of each email.

MindStudio is officially live with our new multi-modal blocks and a few UI updates. Check out the recording of our presentation + demo webinar here.

This week, WWDC 2024 took the world by storm with a plethora of AI updates to iOS, iPadOS, and macOS Sequoia. Following the event, Apple reclaimed its spot as the most valuable company in the world - at least for now. Luma released its AI video generation model in preview, Bubble.io announced an AI-powered web design builder for no-code apps, Mistral AI raised €600M to continue developing open-source AI models, and Microsoft retired Custom GPTs from Copilot.

Resources for Pros

What’s coming next

Voice controls for text-to-speech models

New voice models from Eleven Labs

New image models from Stable Diffusion and Dream AI

Hybrid webinars, on-demand webinars, and 1:1 training sessions

We’re looking into live site crawl blocks to let you chat with websites

As a reminder, we’re now welcoming partners that want to build AIs for their clients. Sign up for extra support, training resources, and more here.

🗞️ Industry news

  1. Apple releases “Apple Intelligence” (AI), coming to Siri, internal tools, and third-party apps



    WWDC 2024 was a historic event for Apple. Along with a few quality-of-life improvements to iOS, like tinted home screens and free-form drag-and-drop for icons, Apple announced its play for AI.

    Echoing Alibaba’s Jack Ma, who called AI “Alibaba Intelligence” in an interview with Elon Musk, Apple decided to call its AI plans “Apple Intelligence.” The company has historically avoided the term AI, preferring to highlight the benefits the technology brings rather than the technology itself - think AFib monitoring on the Apple Watch and fall detection, both powered by machine learning.

    Here are the most important AI updates coming to iOS 18, iPadOS 18, and macOS Sequoia:

    • Most models will run on-device, and the Apple Intelligence features will require an iPhone 15 Pro or above. Apple won’t store your data;

    • Private Cloud Compute lets people use more powerful AI models that can't run on-device, with Apple-grade privacy and security and third-party-audited infrastructure;

    • Siri 2.0 is a complete overhaul of Siri that goes root-deep into the ecosystem. Siri will be able to take actions for you (and third-party apps can program these actions with a new API), chat about any current topic, overlay your screen to take actions on what you're seeing, summarize, write, and more;

    • A partnership with OpenAI is confirmed: Siri 2.0 will get access to ChatGPT, and the two will work in tandem. Apple says it wants to include more providers soon. According to a few sources, Apple is NOT paying OpenAI - it’s providing distribution instead. Apparently, paying in visibility works if you’re Apple;

    • Image editing, image generation, text rewriting, and all the other base features Google Pixel and Samsung phones already got;

    • Genmoji to generate custom emojis with AI on-device;

    • All features will be available for free. No subscription in sight, but Siri 2.0 can sync with your ChatGPT Plus account.


    Plus, a slew of additional features across all apps. While some of these features were expected, Apple still shocked the market with the breadth of the announcements. The market reacted very positively and rewarded Apple with the top spot in market capitalization, a few months after Microsoft took it over at the beginning of 2024.

    Following WWDC 2024, there was a significant amount of outrage, spearheaded by Elon Musk. He claims that all his companies will ban Apple devices if OpenAI models are implemented at the OS level. It’s worth noting that nothing in the keynote suggests this is the case: Apple is simply providing a wrapper for OpenAI models that masks your IP address and only sends data you explicitly consent to share. It seems like a non-problem, and Apple is taking more steps than any other major firm on security and AI, with its Private Cloud Compute infrastructure built on Apple Silicon and transparent third-party auditing.

    For our international users, most Apple Intelligence features might only be available in US English for now. However, the features don’t appear to be region-locked.

  2. Microsoft retires Custom GPTs for Copilot

    Microsoft Copilot, web version (screenshot)



    After only 4 months, Microsoft decided to discontinue Custom GPTs in Microsoft Copilot. This comes right after OpenAI opened up access to GPTs to all free ChatGPT users.

    Custom GPTs were introduced in November 2023 as a revolutionary way to build with AI, but they have since seriously struggled to gain any sort of market share.

    While internal teams and individuals do seem to use Custom GPTs in place of prompt libraries, that hasn’t translated into the marketplace business OpenAI and Microsoft hoped to build.

    Microsoft is shifting to enterprise-first sales with its Copilot Studio product. OpenAI will likely push harder now that GPTs are free for everyone, but creating new GPTs is still reserved for paying users.

  3. Luma AI releases the first high-quality AI video generation model to the public, beating Google and OpenAI to market


    Luma AI released Dream Machine to the public - its first text-to-video model, which generates photorealistic outputs that rival Sora’s.

    I gave it a go, and while it’s true that Sora is still significantly better, this is an awesome model compared to other options like Stable Video Diffusion.

    Generative video is still a somewhat underperforming niche in AI, and Dream Machine is the first high-quality model released to the world without waitlists or vague promises à la Google.

    You can try out the new text-to-video model here for free. New videos can take several minutes to generate and are only a few seconds long, but the trajectory seems positive.

    It’s worth noting there’s another high-quality model out now, Kling AI, but it’s targeting the Chinese market.

In other news, Apple overtook Microsoft in market cap, reaching $3.3t.


Mistral AI raised €600M to continue researching open-source models, although the company is struggling to keep up with Meta’s open-source Llama 3 family. Recently, Mistral made it easier to fine-tune its models in “la Plateforme,” its alternative to OpenAI’s console.

On Thursday, June 13, Bubble.io went live with a few updates to its core no-code app builder. Users can now generate full landing pages, including dynamic components and responsive designs, with a couple of prompts. The feature is similar to one from FlutterFlow, a big competitor that shipped AI generation earlier this year.

🔥 Product Updates

MindStudio turned multimodal this week. Users now have access to new blocks:

  • Generate image: Use DALL-E 2 or DALL-E 3 to generate images that stun your audience, without needing your own API key.

  • Display text: Use this block to display values stored in variables. This block will replace the current send message block when the sender is set to “system.”

  • Generate text: Replaces the current “Send Message” block when the sender is set to “user.”

  • Text to speech: Use audio models to generate AI outputs that speak to you.

  • Analyze image: Tap into the power of vision models like GPT-4 Vision to read images and add the result to your context.

This also means new output components, including audio and video playback. You can combine outputs to build beautiful results like an article with an image, a title, audio narration, and more.

Coming soon:

  • Voice controls for text-to-speech models. Right now, all models are loaded with the default voice and settings. We want to give you access to controls to customize these.

  • A more intuitive UI for manipulating multiple output types. For now, the “?” icon on the “Display Text” block provides more information on what you can add within its borders (more below);

  • Refinements to the Chat terminator to work with the new multi-modal capabilities;

  • … and Groq! 🔥 The company is finally ready to ship its Enterprise plan. We will be among their first customers, and we’re in touch with them to bring Groq to MindStudio very soon. Groq models currently run at speeds of up to 1,200 tokens/second, meaning you can get AI outputs in the time it takes to reload a web page.
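To put that throughput in perspective, here’s a rough back-of-the-envelope sketch (the 1,200 tokens/second figure is Groq’s advertised speed from above; the 500-token response length is an assumption for illustration):

```python
# Rough latency estimate at Groq-class throughput.
# Assumes token generation is the only bottleneck (ignores network and queuing).
TOKENS_PER_SECOND = 1200  # Groq's advertised speed

def generation_time(num_tokens: int) -> float:
    """Seconds to stream `num_tokens` at the assumed throughput."""
    return num_tokens / TOKENS_PER_SECOND

# A typical ~500-token answer would stream in under half a second:
print(f"{generation_time(500):.2f}s")  # → 0.42s
```

For comparison, the same 500 tokens at a more typical ~50 tokens/second would take around 10 seconds, which is why the speed-up feels like a page reload rather than a wait.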

💡 Tip of The Week

The new multimodal blocks have significantly changed a core functionality in MindStudio. Instead of a single "Send Message" block, MindStudio now has two separate blocks: "Generate Text" and "Display Text."

You read that right… the “Send Message” block is no more. But worry not, all your existing blocks have been converted to the new ones. You don’t need to take any action on past workflows, but you need to understand how to use these for new workflows.

In the past, the “Send Message” block had two dropdowns:

  • Sender: either "user" or "system." If you selected "user," you would actually communicate with the AI model and get an output. If you selected "system," MindStudio would show whatever was in the prompt box without sending anything to any AI. It was a way to display the content of multiple variables at once or to compose a final output to show after a series of generations;

  • Response Behavior: either "Display to User," which streams the response to the user live and saves it in the context, or "Assign to Variable," which saves the response in a variable for later use and doesn’t automatically include the output in the context.

The “sender” option was the key differentiator, as “user” sent a message to the AI while “system” simply served as a display feature. Starting now, you’ll have to use:

  • Generate text blocks whenever you want to send a prompt to the AI, receive a result, and either show it to the user or save it as a variable (previously a “Send Message” block with sender → user);

  • Display text blocks whenever you want to show the content of multiple variables or static text, or compose a message that includes multiple images, text outputs, or even audio components.
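If it helps, the mapping above can be sketched in a few lines of Python (a hypothetical illustration only - MindStudio is a no-code tool and doesn’t expose blocks programmatically; the function name is made up):

```python
# Hypothetical sketch of how the old "Send Message" sender setting
# maps to the new blocks. "sender" mirrors the old dropdown described above.

def new_block_for(sender: str) -> str:
    """Map an old Send Message block to its replacement block type."""
    if sender == "user":
        # The prompt was actually sent to the AI model -> Generate Text
        return "Generate Text"
    if sender == "system":
        # The content was only displayed, never sent to a model -> Display Text
        return "Display Text"
    raise ValueError(f"unknown sender: {sender}")

print(new_block_for("user"))    # → Generate Text
print(new_block_for("system"))  # → Display Text
```

In short: if the old block talked to an AI, it’s now Generate Text; if it only showed content, it’s now Display Text.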

🤝 Community Events

If you want to hang out with our team, we usually host a Discord event every Friday @ 3PM Eastern. Join our Discord server to keep up to date with the hangouts - our entire team is active there.

You can register for upcoming events on our brand new events page here.

Additionally, our previous live workshops are now hybrid webinars. This means:

  • The video is pre-recorded. While this might seem counter-intuitive, I promise this is actually a better experience for you. It respects your time by delivering valuable information efficiently, ensures a smoother presentation flow, and allows you to watch at your convenience, regardless of your timezone;

  • You can join most webinar sessions whenever you want. All sessions include lots of live interactions, and you get to chat with our team that monitors all sessions remotely;

  • Hybrid webinars are NOT the same as YouTube videos, and some give you the option to book 1:1 training sessions with me. Give them a go - you won’t regret it;

  • Three hybrid webinars are up already: how to choose a model (updated recently), data sources, and how to build an AI agency.

Find all upcoming hybrid webinars on our events page. Look for the “on-demand” tag.

Why is this good for your learning? 🤔

Hybrid webinars let me spend more time making new content instead of repeating the same live session over and over again. You can still reach me on Discord and during every session (we see all chat messages on Slack and keep an eye on them). This way, I can create more useful content for different needs, and you can join and learn whenever you want.

The quality of the content is also much higher. I’m not a native speaker, and some of the example builds we need to showcase features actually take quite a bit of time. The recording allows me to spend as much time as needed to fine-tune all details before adding interactions.

Long story short: I think you’ll love these. If you don’t, though, please let me know. There will be plenty of opportunities to share feedback during the session.

Thank you for being an invaluable member of our community. It’s always great to see many of you join multiple workshops 🔥

🌯 That’s a wrap!

Stay tuned to learn more about what’s next and get tips & tricks for your MindStudio build.

You saw it here first,

Giorgio Barilla
MindStudio Developer & Project Manager @ MindStudio

How did you like this issue?

Login or Subscribe to participate in polls.