MCP Server
Herd’s MCP Server allows you to securely expose web applications to Large Language Models (LLMs) through local browser automation. Using the Model Context Protocol (MCP), you can create a secure bridge between AI models and your favorite websites without sharing credentials or running browsers in the cloud.
Key Benefits
- Secure Access: Your browser runs locally, keeping your credentials and cookies secure
- Privacy First: No need to share sensitive data or tokens with third-party services
- Native Experience: Interact with web apps through your actual browser, maintaining all your preferences and login state
- Universal Compatibility: Works with any web application without needing API access
- Custom Tools: Create tailored tools that encapsulate complex web interactions
Installation
```shell
npm install @monitoro/herd zod
```

The examples below also use zod for tool input schemas, so it is installed alongside the Herd package.
Basic Setup
Here’s how to create an MCP server that exposes web application functionality:
```javascript
import { HerdMcpServer } from '@monitoro/herd';

const server = new HerdMcpServer({
  info: {
    name: "gmail-assistant",
    version: "1.0.0",
    description: "Gmail automation tools for LLMs"
  },
  transport: {
    type: "sse",
    port: 3000
  },
  herd: {
    token: "your-herd-token" // Get from herd.garden
  }
});

// Start the server
await server.start();
```
Creating Web App Tools
Tools encapsulate web application functionality for LLMs. Here are some examples:
```javascript
import { z } from 'zod';

// Gmail: Compose and send a new email
server.tool({
  name: "composeEmail",
  description: "Compose and send a new email",
  schema: {
    to: z.string().email(),
    subject: z.string(),
    body: z.string()
  }
}, async ({ to, subject, body }, devices) => {
  const device = devices[0];
  const page = await device.newPage();

  // Navigate to Gmail's compose view
  await page.goto('https://mail.google.com/mail/u/0/#compose');

  // Fill out the email form
  await page.type('input[aria-label="To"]', to);
  await page.type('input[aria-label="Subject"]', subject);
  await page.type('div[aria-label="Message Body"]', body);

  // Send the email
  await page.click('div[aria-label="Send"]');

  return { success: true };
});
```
```javascript
// Twitter: Post a tweet
server.tool({
  name: "postTweet",
  description: "Post a new tweet",
  schema: {
    content: z.string().max(280)
  }
}, async ({ content }, devices) => {
  const device = devices[0];
  const page = await device.newPage();

  await page.goto('https://twitter.com/compose/tweet');
  await page.type('div[aria-label="Tweet text"]', content);
  await page.click('div[data-testid="tweetButton"]');

  return { success: true };
});
```
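The `z.string().max(280)` schema rejects over-length content before any browser automation runs, so the LLM gets a validation error instead of a half-finished page interaction. As a rough sketch of the guard this schema expresses (an illustration, not zod's actual implementation):

```javascript
// Hand-rolled sketch of the length guard that z.string().max(280)
// performs for us before the tool handler is invoked.
function validateTweetContent(content) {
  if (typeof content !== 'string') {
    return { ok: false, error: 'content must be a string' };
  }
  if (content.length > 280) {
    return { ok: false, error: `content exceeds 280 characters (got ${content.length})` };
  }
  return { ok: true, value: content };
}

console.log(validateTweetContent('Hello from MCP!').ok); // → true
console.log(validateTweetContent('x'.repeat(281)).ok);   // → false
```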
Creating Web App Resources
Resources provide structured data from web applications:
```javascript
// Gmail: Unread emails
server.resource({
  name: "unreadEmails",
  uriOrTemplate: "gmail/unread",
}, async (devices) => {
  const device = devices[0];
  const page = await device.newPage();

  await page.goto('https://mail.google.com/mail/u/0/#inbox');

  // Extract unread email information (the selectors are illustrative
  // and may need adjusting for Gmail's current markup)
  const emails = await page.evaluate(() => {
    return Array.from(document.querySelectorAll('tr.unread'))
      .map(row => ({
        sender: row.querySelector('.sender')?.textContent ?? '',
        subject: row.querySelector('.subject')?.textContent ?? '',
        date: row.querySelector('.date')?.textContent ?? ''
      }));
  });

  return { emails };
});
```
```javascript
// LinkedIn: Profile information
server.resource({
  name: "linkedinProfile",
  uriOrTemplate: "linkedin/profile/{username}",
}, async ({ username }, devices) => {
  const device = devices[0];
  const page = await device.newPage();

  await page.goto(`https://www.linkedin.com/in/${username}`);

  // Extract profile information
  return await page.extract({
    name: 'h1',
    headline: '.headline',
    about: '.about-section p',
    experience: '.experience-section li'
  });
});
```
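The `{username}` placeholder in `uriOrTemplate` makes this a templated resource: one resource per profile, with the matched path segment passed to the handler as `username`. The routing can be sketched with a small matcher (illustrative only, not Herd's internal implementation):

```javascript
// Illustrative only: turn a URI template like "linkedin/profile/{username}"
// into a matcher that extracts named parameters from a concrete URI.
function matchUriTemplate(template, uri) {
  const names = [];
  const pattern = template.replace(/\{(\w+)\}/g, (_, name) => {
    names.push(name);
    return '([^/]+)'; // one path segment per placeholder
  });
  const match = uri.match(new RegExp(`^${pattern}$`));
  if (!match) return null;
  return Object.fromEntries(names.map((name, i) => [name, match[i + 1]]));
}

console.log(matchUriTemplate('linkedin/profile/{username}', 'linkedin/profile/jane-doe'));
// → { username: 'jane-doe' }
```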
Complete Example: Twitter Assistant
Here’s a complete example showing how to create an MCP server that provides Twitter automation capabilities to LLMs:
```javascript
import { HerdMcpServer } from '@monitoro/herd';
import { z } from 'zod';

const server = new HerdMcpServer({
  info: {
    name: "twitter-assistant",
    version: "1.0.0",
    description: "Twitter automation for LLMs"
  },
  transport: {
    type: "sse",
    port: 3000
  },
  herd: {
    token: process.env.HERD_TOKEN
  }
});
```
```javascript
// Post a tweet
server.tool({
  name: "postTweet",
  description: "Post a new tweet",
  schema: {
    content: z.string().max(280)
  }
}, async ({ content }, devices) => {
  const device = devices[0];
  const page = await device.newPage();

  await page.goto('https://twitter.com/compose/tweet');
  await page.type('div[aria-label="Tweet text"]', content);
  await page.click('div[data-testid="tweetButton"]');

  return { success: true };
});
```
```javascript
// Get timeline
server.resource({
  name: "timeline",
  uriOrTemplate: "twitter/timeline",
}, async (devices) => {
  const device = devices[0];
  const page = await device.newPage();

  await page.goto('https://twitter.com/home');

  return await page.extract({
    tweets: {
      _$r: 'article[data-testid="tweet"]',
      author: '[data-testid="User-Name"]',
      content: '[data-testid="tweetText"]',
      stats: {
        likes: '[data-testid="like"]',
        retweets: '[data-testid="retweet"]'
      }
    }
  });
});
```
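In this extract call, the `_$r` key names the repeated root selector: each matching `article` element becomes one entry in `tweets`, with the sibling keys resolved relative to it. The resulting shape can be sketched over plain objects standing in for DOM elements (illustrative only, not the library's extractor):

```javascript
// Illustrative only: how a repeated selector map shapes its output.
// Each mock "element" maps selector strings to the text that selector
// would match; the real extractor queries live DOM nodes instead.
function extractRepeated(rootElements, fields) {
  return rootElements.map((el) => {
    const out = {};
    for (const [key, sel] of Object.entries(fields)) {
      // Nested objects recurse, producing nested result objects (like stats)
      out[key] = typeof sel === 'string' ? el[sel] : extractRepeated([el], sel)[0];
    }
    return out;
  });
}

const mockArticles = [
  {
    '[data-testid="User-Name"]': 'Jane Doe',
    '[data-testid="tweetText"]': 'Hello from MCP',
    '[data-testid="like"]': '12',
    '[data-testid="retweet"]': '3',
  },
];

console.log(
  extractRepeated(mockArticles, {
    author: '[data-testid="User-Name"]',
    content: '[data-testid="tweetText"]',
    stats: { likes: '[data-testid="like"]', retweets: '[data-testid="retweet"]' },
  })
);
// → [ { author: 'Jane Doe', content: 'Hello from MCP',
//       stats: { likes: '12', retweets: '3' } } ]
```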
```javascript
// Like a tweet
server.tool({
  name: "likeTweet",
  description: "Like a tweet by its URL",
  schema: {
    tweetUrl: z.string().url()
  }
}, async ({ tweetUrl }, devices) => {
  const device = devices[0];
  const page = await device.newPage();

  await page.goto(tweetUrl);
  await page.click('div[data-testid="like"]');

  return { success: true };
});
```
```javascript
// Start the server
server.start().then(() => {
  console.log('Twitter Assistant ready!');
}).catch(console.error);
```
This setup allows LLMs to:
- Post tweets
- Read the timeline
- Like tweets

All of this runs through your local browser, so your credentials and session state never leave your machine.
Read more about the Model Context Protocol in the official MCP documentation.
Last updated: 3/31/2025