Keynote AI: How to Use AI Features in Apple Keynote Presentations

Matthew Diakonov · 11 min read


Apple Keynote is the default presentation tool for macOS and iOS, but it has lagged behind Google Slides and PowerPoint when it comes to built-in AI features. If you have searched for "keynote ai" you are probably trying to figure out what AI can actually do inside Keynote today, what Apple Intelligence brings to the table, and whether there are better options for automating your slide workflow. This guide covers all three.

What Apple Intelligence Adds to Keynote

Starting with macOS 15.1 and iOS 18.1, Apple Intelligence introduced Writing Tools that work across all text fields, including Keynote. These tools let you rewrite, proofread, and summarize selected text directly on a slide without leaving the app.

Here is what you can do right now with Apple Intelligence in Keynote:

| Feature | What It Does | Where It Works |
|---|---|---|
| Rewrite | Rewrites selected text in a different tone (friendly, professional, concise) | Any text box on a slide |
| Proofread | Catches grammar, spelling, and style issues with inline suggestions | Any text box |
| Summarize | Condenses long text into key points | Presenter notes, text boxes |
| Smart Reply | Suggests responses in collaboration comments | Shared presentations |
| Image Playground | Generates stylized images from text prompts | Insert as slide images |
| Clean Up (Photos) | Removes unwanted objects from photos used in slides | Imported photos |

To use Writing Tools, select text on any slide, right-click, and choose Writing Tools from the context menu. You can also select text and click the Writing Tools button in the toolbar.

Note

Apple Intelligence requires an M1 chip or later (or A17 Pro on iPhone). If your Mac has an Intel chip, these features will not appear in Keynote.

What Keynote AI Cannot Do (Yet)

The gap between what people expect from "keynote ai" and what Apple ships today is wide. Apple Intelligence does not generate slides, suggest layouts, create entire presentations from a prompt, or auto-design based on your content. Compare that to what competitors offer:

| Capability | Keynote + Apple Intelligence | Google Slides + Gemini | PowerPoint + Copilot |
|---|---|---|---|
| Generate full presentation from prompt | No | Yes | Yes |
| Suggest slide layouts | No | Yes | Yes |
| Auto-format text and images | No | Limited | Yes |
| Rewrite/proofread text | Yes | Yes | Yes |
| Generate images from text | Yes (Image Playground) | Yes (Gemini) | Yes (DALL-E) |
| Speaker notes from slides | No | Yes | Yes |
| Summarize presentation | Partial (text only) | Yes | Yes |

This table explains why so many people search for "keynote ai" and land on third-party tools. Apple's approach is text-level assistance, not presentation-level intelligence.

Automating Keynote With Shortcuts and AppleScript

Before reaching for third-party AI tools, consider what you can automate natively. Keynote has deep AppleScript and Shortcuts support that most users never touch.

AppleScript: Batch Operations

AppleScript can create slides, set text, apply themes, export to PDF, and manipulate every element on a slide. Here is a script that creates a presentation with a title slide and three content slides:

tell application "Keynote"
    activate
    set newDoc to make new document with properties {document theme:theme "Basic White"}
    
    tell newDoc
        -- Set title slide
        tell slide 1
            set object text of default title item to "Quarterly Review"
            set object text of default body item to "Q1 2026 Results"
        end tell
        
        -- Add content slides
        repeat with i from 1 to 3
            set newSlide to make new slide with properties {base slide:master slide "Title & Bullets"}
            tell newSlide
                set object text of default title item to "Section " & i
                set object text of default body item to "Content goes here"
            end tell
        end repeat
    end tell
end tell
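
The export step mentioned above uses the same scripting dictionary. A minimal sketch that saves the front document as a PDF on the Desktop (the output file name is an assumption for illustration):

```applescript
tell application "Keynote"
    -- Build an HFS path on the Desktop and export the front document
    set outPath to (path to desktop as text) & "Quarterly Review.pdf"
    export front document to file outPath as PDF
end tell
```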

Shortcuts: AI-Powered Pipelines

The real power comes from chaining Shortcuts actions. You can feed text through an AI service (ChatGPT, Claude, or a local model via Ollama), then pass the result into Keynote actions:

1. Get text from clipboard (your outline)
2. Ask ChatGPT: "Turn this outline into 5 slide titles with 3 bullet points each, output as JSON"
3. Parse JSON
4. For each item: Add slide to Keynote, set title, set body text
5. Export to PDF

This approach works today on macOS 14+ with the ChatGPT Shortcuts integration. It is not as polished as Copilot's one-click generation, but it gives you full control over the prompt and output format.
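
Step 4 can be implemented with Shortcuts' Run AppleScript action. A minimal sketch, assuming the earlier actions hand over slide titles as newline-separated text (JSON parsing would happen in an earlier Shortcuts action):

```applescript
on run {input, parameters}
    -- One slide per line of input from the previous Shortcuts action
    set slideTitles to paragraphs of (input as text)
    tell application "Keynote"
        tell front document
            repeat with t in slideTitles
                set s to make new slide
                set object text of default title item of s to (t as text)
            end repeat
        end tell
    end tell
    return input
end run
```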

Using AI Agents to Control Keynote

The most powerful approach to "keynote AI" is not a plugin or a built-in feature. It is an AI agent that can see your screen, understand what is in Keynote, and operate it directly through macOS accessibility APIs.

You: "Make a 5-slide deck" → AI Agent (screen + accessibility) → Keynote.app (native control). The agent reads the screen, clicks menus, types text, and drags elements. No API needed: the agent operates Keynote like a human would.

This is fundamentally different from a plugin that calls an API. An AI agent that operates at the macOS level can:

Open Keynote, create a new presentation, and pick a theme
Read existing slides and understand their content visually
Add, reorder, and delete slides using the slide navigator
Type text, format it, and position elements by dragging
Insert images, charts, and shapes through Keynote's own menus
Export to PDF, PowerPoint, or any format Keynote supports

The key difference is that a native AI agent does not need Keynote to expose an API or install a plugin. It works with Keynote exactly as it ships from Apple, using the same accessibility tree and input events that a human uses.

How Native macOS Agents Interact With Keynote

On macOS, every app exposes its UI through the Accessibility framework (AXUIElement). An AI agent reads this tree to understand what is on screen: menus, buttons, text fields, slide thumbnails. It then sends synthetic clicks, keystrokes, and drags to operate the app.

// Read the accessibility tree of Keynote
import ApplicationServices
import AppKit

// Find the running Keynote process and attach to its accessibility tree
guard let keynote = NSRunningApplication
    .runningApplications(withBundleIdentifier: "com.apple.iWork.Keynote")
    .first else { fatalError("Keynote is not running") }

let app = AXUIElementCreateApplication(keynote.processIdentifier)

// List the app's windows
var value: CFTypeRef?
AXUIElementCopyAttributeValue(app, kAXWindowsAttribute as CFString, &value)

// Get the focused element (the text box you're editing)
AXUIElementCopyAttributeValue(app, kAXFocusedUIElementAttribute as CFString, &value)

This approach means the agent can work with any version of Keynote, on any macOS release that supports accessibility. There is no dependency on Apple shipping new AI features.
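
The same accessibility tree is reachable from AppleScript through System Events, which is a quick way to prototype the kind of actions an agent performs. A sketch that adds a slide via Keynote's own menu bar and types into the focused element (the "New Slide" menu path is an assumption based on an English-localized Keynote):

```applescript
tell application "System Events"
    tell process "Keynote"
        set frontmost to true
        -- Drive Keynote through its own menu bar, like a human would
        click menu item "New Slide" of menu "Slide" of menu bar 1
        -- Send keystrokes to whatever element has keyboard focus
        keystroke "Hello from the agent"
    end tell
end tell
```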

Building a Keynote AI Workflow With Fazm

Here is a practical workflow that combines an AI agent with Keynote to create a presentation from a text outline:

Step 1: Write your outline in any text editor. Keep it simple: one line per slide, indented lines for bullet points.

Step 2: Tell the AI agent what you want. For example: "Open Keynote, create a new presentation with the Gradient theme, and build slides from this outline."

Step 3: The agent opens Keynote, selects the theme, and creates each slide by navigating the app's UI. It types your content, adjusts formatting, and moves between slides using the slide navigator.

Step 4: Review the result. The agent can also make edits: "Make the title on slide 3 bigger" or "Move the image on slide 5 to the right."

This is not a hypothetical workflow. Native macOS AI agents that can control desktop apps through accessibility APIs exist today.

Common Pitfalls

  • Expecting ChatGPT plugins to control Keynote directly. Browser-based AI tools cannot interact with native macOS apps. They can generate text or outlines, but someone (or something) still needs to put that content into Keynote. A native agent bridges this gap.

  • Using Automator instead of Shortcuts. Apple deprecated Automator in favor of Shortcuts. Automator workflows still run, but they will not receive new actions or AI integrations. Build new automations in Shortcuts.

  • Ignoring presenter notes. AI writing tools in Keynote work on presenter notes too. If you spend time polishing slide text but leave notes blank, you are missing half the value. Use Writing Tools to generate speaker notes from your slide content.

  • Overloading slides with AI-generated text. AI makes it easy to produce paragraphs. Good slides have 3 to 5 bullet points per slide, not walls of text. Use AI to condense, not expand.

Quick Checklist for Keynote AI Workflows

  1. Confirm your Mac has an M1 chip or later (required for Apple Intelligence)
  2. Update to macOS 15.1+ and enable Apple Intelligence in System Settings
  3. Use Writing Tools (select text, right-click) for rewriting and proofreading
  4. Use Image Playground for generating slide images from prompts
  5. Build Shortcuts pipelines for batch slide creation from outlines
  6. Consider a native macOS AI agent for full presentation automation
  7. Always review AI output before presenting; check facts, tone, and formatting

Wrapping Up

"Keynote AI" today means three things: Apple Intelligence's text tools for rewriting and proofreading, Shortcuts pipelines that chain AI services with Keynote actions, and native macOS agents that control Keynote directly through accessibility APIs. Apple's built-in features handle the basics, but the real power comes from agents that can operate Keynote the same way you do, without waiting for Apple to ship new features.

Fazm is an open-source macOS AI agent that can control native apps like Keynote; the source is available on GitHub.
