# Using Perplexity from the command line via llm

My favorite search (or rather “answer”) engine is Perplexity, and my favorite way to interact with computers is the command-line, so why not combine the two?
Here’s how:
## Prerequisites
- You need a Perplexity account and a payment method so you can obtain an API key from them. Perplexity’s model pricing is similar to other LLM providers’, ranging from $1 to $15 per million tokens depending on which model you use. I am a Perplexity Pro subscriber, which includes some $5 worth of API usage credits per month—enough for most casual users like me.
- Any system that runs Python.
- A love for all things command-line!
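To get a feel for the pricing mentioned above, here’s a rough back-of-the-envelope calculation. The $15-per-million figure is the top of the quoted range; actual per-model prices vary, so treat this as a sketch, not a quote:

```shell
# Rough sketch: how many tokens $5 of monthly credit buys at the
# most expensive tier mentioned above ($15 per million tokens).
# Everything is in cents so the arithmetic stays integer-only.
budget_cents=500                 # $5.00 monthly credit
price_per_million_cents=1500     # $15.00 per million tokens
tokens=$(( budget_cents * 1000000 / price_per_million_cents ))
echo "$tokens tokens per month at the top tier"
```

At the cheapest $1 tier, the same $5 stretches to 5 million tokens—plenty for casual querying.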
Here are the steps:
### Step 1: Install Simon Willison’s llm command-line tool
You should install it in any case—it’s awesome! `llm` is a comprehensive CLI utility for accessing almost any LLM from the command-line.
Thanks to the magic of Astral’s uv, it’s easy to install, too.
```
> uv tool install llm
```
That’s it. Feel free to check the `llm` documentation to learn about its powerful features.
### Step 2: Install the llm-perplexity plugin
Which is just as easy:
```
> llm install llm-perplexity
```
Now, you can check which additional models are available to you in `llm`, beyond the default ones:
```
❯ llm models
OpenAI Chat: gpt-4o (aliases: 4o)
OpenAI Chat: chatgpt-4o-latest (aliases: chatgpt-4o)
…
(lots of other OpenAI models…)
…
Perplexity: sonar-deep-research
Perplexity: sonar-reasoning-pro
Perplexity: sonar-reasoning
Perplexity: sonar-pro
Perplexity: sonar
Perplexity: r1-1776
…
(maybe some other models, too, if you have more plugins installed…)
```
All those `sonar` models come with Perplexity’s search integration built in!
### Step 3: Add your Perplexity API key
Go to your Perplexity account page’s API keys section and create a key there, if you haven’t already. Copy it into your clipboard, then:
```
> llm keys set perplexity
```
(Paste your key into your terminal window when it asks you to do so. It’ll remain invisible.)
It goes without saying that you should treat API keys carefully. You can use `llm keys path` to see where `llm` stores its keys and check whether that’s secure enough for you. If you prefer using environment variables for your API keys, you can use the `--key $YOUR_API_KEY_HERE` option. Learn more about `llm`’s key management here.
That’s it! Now you can search on Perplexity right from your command-line.
```
❯ llm -m sonar-pro "Which chillout summer tracks are trending right now?"
Some of the **trending chillout summer tracks in 2025** include a mix of
relaxed house and deep, summery beats popular across streaming platforms
and social media. Tracks gaining attention right now feature both
emerging artists and established chillout names:
- **Unora – Lonely (feat. Chris Crone)**
- **Novino – Endlessly (feat. Ladina Viva)**
- **SRTW, YVO – I Want It All**
- **NLSN – Lay in My Arms (feat. OLIM)**
- **Mauve – Not Alone**
- **Luvine – Lifeline (feat. LIZ LUNE)**
- **Shiek – Counting Stars (feat. RED)**
- **Noile, nourii – By Your Side**
- **Leviro – Summer Day**
- **SRTW, Nimus – Feel It Still**
- **Nimus – Butterflies**[1][2]
On popular playlists like "Beach Chill 2025" and "Deep House Summer
Chillout Mix," additional trending tracks and artists include:
- **Pulsea**
- **Nova**
- **Adam Port, Stryv, Keinemusik, Orso, Malachiii – Move**
- **Billie Eilish – Birds of a Feather**
- **Lost Frequencies, Bandit – Dance In The Sunlight**[3][5]
These tracks are making rounds on **Spotify, TikTok, and YouTube**,
especially in mixes titled "Chillout Vibes 2025," "Relaxing Music for
Hot Days," and "Ibiza Summer Mix." The vibe is mellow, sun-soaked, and
perfect for hot afternoons, whether relaxing, working, or socializing
by the beach[1][2][3][4][5].
## Citations:
[1] https://www.youtube.com/watch?v=AWUUAajD5VQ
[2] https://www.youtube.com/watch?v=6aeEQ54xu90
[3] https://www.youtube.com/watch?v=Kap6tOpNXvw
[4] https://www.youtube.com/watch?v=rlVtiaF2SPg
[5] https://open.spotify.com/playlist/7IGgZbBQiT4DThMyGxAlBX
```
## Bonus: Make it an alias!
Depending on your favorite shell, you can make this an alias too.
For Bash:
```
bash-3.2$ alias pplx="llm -m sonar-pro"
bash-3.2$ pplx "What’s the weather forecast for Munich today?"
The **weather forecast for Munich today, July 10, 2025**, calls for
**warmer temperatures with some sun** throughout the day and the
possibility of a **morning shower in places**. The expected **high is
around 70–73°F (21–23°C)**, and the **low tonight will be about 54–55°F
(12–13°C)**. Conditions will be **partly to mostly cloudy** by
evening[1][2].
The weather is **generally clear** in the early morning, with
temperatures gradually warming as the day progresses[2].
## Citations:
[1] https://www.accuweather.com/en/de/munich/80331/weather-forecast/178086
[2] https://www.timeanddate.com/weather/germany/munich
[3] https://www.accuweather.com/en/de/munich/80331/daily-weather-forecast/178086
[4] https://weather.metoffice.gov.uk/forecast/u281yf6ky
[5] https://www.wunderground.com/forecast/de/munich
```
I prefer Fish, so for me it means adding the following to my `~/.config/fish/config.fish`:

```fish
function pplx
    llm -m sonar-pro "$argv"
end
```
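For completeness, the same idea as a Bash function rather than an alias—a function forwards arguments more predictably if you later want to add flags. A minimal sketch for your `~/.bashrc` (the function name `pplx` simply mirrors the alias above):

```shell
# Bash counterpart of the Fish function: a thin wrapper around llm.
# "$@" forwards every argument to llm unchanged, quoting included.
pplx() {
  llm -m sonar-pro "$@"
}
```

After sourcing your `~/.bashrc`, `pplx "your question here"` behaves exactly like the alias.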
## More fun with llm
Even if you’ve been using `llm` for a while, there’s still lots to learn. For example, I recently watched Christopher Smith’s video “Become a command-line superhero with Simon Willison’s llm tool” and learned about some other cool tools that go well with it:
### strip-tags
This handy tool, also from Simon Willison, strips all HTML tags from its input. Installation is similarly simple:
```
> uv tool install strip-tags
```
You can use it like this:
```
❯ curl -s "https://docs.aws.amazon.com/nova/latest/userguide/image-generation.html" | \
  strip-tags | llm -m nova-pro "Turn this into a Markdown document"
# Amazon Nova User Guide for Amazon Nova
## Generating Images with Amazon Nova Canvas
With the Amazon Nova Canvas model, you can generate realistic,
studio-quality images using text prompts. You can use Amazon Nova
Canvas for text-to-image and image editing applications.
### Amazon Nova Canvas Features
- **Text-to-image (T2I) generation**: Input a text prompt and generate
a new image as output. The generated image captures the concepts
described by the text prompt.
- **Image conditioning**: Uses an input reference image to guide image
generation. The model generates an output image that aligns with the
layout and composition of the reference image, while still following
the textual prompt.
```
… and so on.
And yes, there’s an `llm` plugin for Amazon Nova models on Amazon Bedrock, too, though it could use an update: `llm-bedrock`
### Repomix
This tool converts an entire `git` repository into a single, LLM-friendly XML file. This is great for coding projects!
```
> uv tool install repomix
```
```
❯ repomix -o - --include "templates/**,themes/**" | \
  llm -m claude-4-sonnet "Read this repo summary carefully, then suggest \
  the top 3 technical SEO-related improvements for my Zola-based blog."
Based on the repository summary, I can see you have a Zola-based blog
with a custom theme (ct5) that includes HTMX, Alpine.js, and
webmentions functionality. Here are the top 3 technical SEO
improvements I'd suggest:
## 1. **Optimize JavaScript Loading & Core Web Vitals**
Your largest files are minified JavaScript libraries (95KB+ combined
for HTMX and Alpine.js). Consider:
- **Lazy load non-critical JS**: Load Alpine.js and HTMX only when needed
- **Implement resource hints**: Add `<link rel="preload">` for critical
JS and `<link rel="prefetch">` for non-critical resources
- **Code splitting**: Break up your 18KB `header_image.js` into
smaller, page-specific chunks
- **Use `defer` or `async` attributes** to prevent render-blocking
## 2. **Implement Structured Data & Rich Snippets**
With webmentions already in place, you're clearly focused on
social/semantic web features. Enhance this with:
- **JSON-LD structured data** for articles, author information, and
organization markup
- **Article schema** with proper `datePublished`, `dateModified`,
`author`, and `publisher` fields
- **Breadcrumb schema** for better navigation understanding
- **FAQ or HowTo schema** where applicable for your content
## 3. **Optimize Header Image Handling**
Your large `header_image.html` macro (31KB) and custom JS suggest
complex image handling. Improve with:
- **Responsive images**: Implement `srcset` and `sizes` attributes
for different viewport sizes
- **Next-gen image formats**: Use WebP/AVIF with fallbacks
- **Lazy loading**: Add `loading="lazy"` for below-the-fold images
- **Proper image dimensions**: Include `width` and `height`
attributes to prevent layout shift (CLS)
These improvements will enhance your Core Web Vitals scores, search
engine understanding, and overall technical SEO performance while
maintaining your existing functionality.
```
(You probably guessed it: there’s an `llm-anthropic` plugin, too.)
Well, I guess I still have some more stuff to do around here…