Enhancing the VS Code Agent Mode to integrate with Local tools using Model Context Protocol (MCP)

Building a Todo List Server with Model Context Protocol (MCP)


This blog post walks through creating a Todo List server using the Model Context Protocol (MCP), demonstrating how to build AI-friendly tools that integrate seamlessly with VS Code.

πŸ”— Source Code: The complete implementation is available on GitHub


The Evolution of AI-Assisted Development

I have been using VS Code with GitHub Copilot for development purposes. The introduction of text-based chat, which brought GPT capabilities directly into the IDE, was revolutionary.

GitHub Copilot and the Local Tools Gap

GitHub Copilot has revolutionized how developers write code by providing intelligent code suggestions and completions. While it excels at understanding code context and generating relevant snippets, there has been a notable gap in its ability to interact with local development tools and intranet knowledge bases, or to execute actions in the development environment. This limitation means that while Copilot can suggest code, it cannot directly help with tasks like running commands, managing files, or interacting with local services.

Agent Mode: Bridging the Gap

The introduction of Agent Mode in GitHub Copilot represents a significant step forward in AI-assisted development. It enables Copilot to:

  • Execute terminal commands
  • Modify files directly
  • Interact with the VS Code environment
  • Handle project-specific tasks

This advancement transformed Copilot from a passive code suggestion tool into an active development partner that can help manage your entire development workflow. Here are some powerful capabilities and example interactions:

1. Build and Test Automation

Trigger Maven builds with specific profiles:

"Run mvn clean install with the 'production' profile for my project"

Execute JUnit test suites:

"Execute all JUnit tests in the UserServiceTest class"

Run code quality tools:

"Run ESLint on all JavaScript files in the src directory"
"Start a local Sonar analysis with coverage and security scan"

2. Documentation and Release Management

Generate release documentation:

"Generate release notes for changes between tag v1.2.0 and v1.3.0"

Technical documentation:

"Create a technical design document for the authentication service"
"Update the API documentation in Confluence for the new endpoints"

3. Project Management Integration

JIRA ticket management:

"Create a JIRA ticket for the memory leak bug we found in the login service"
"Convert all TODO comments in AuthService.java to JIRA tickets"

Sprint management:

"Update the status of PROJ-123 to 'In Review' and add a comment with the PR link"
"Show me all JIRA tickets assigned to me that are blocking the current sprint"

4. Cross-Repository Operations

Multi-repo analysis:

"Check if the latest changes in the API repo are compatible with our client library"
"Run integration tests across both the frontend and backend repositories"

While these capabilities demonstrate the power of Agent Mode, they highlight a crucial challenge: the need for external API integrations. Each of these tasks requires:

  • Authentication with external services (JIRA, Confluence, Sonar)
  • Managing different API versions and endpoints
  • Handling various authentication methods
  • Maintaining connection states and sessions
  • Coordinating operations across multiple systems

This complexity creates significant overhead for developers who need to:

  1. Implement and maintain integration code for each service
  2. Handle authentication and authorization
  3. Manage API versioning and changes
  4. Deal with different response formats and error handling

MCP: The New Standard for AI Tool Integration

Model Context Protocol (MCP) emerges as the next evolution in this space, providing a standardized way for AI models to interact with development tools and services. Unlike traditional approaches where AI assistants are limited to suggesting code, MCP enables:

  1. Direct Tool Integration

    • AI models can directly invoke local tools
    • Real-time interaction with development environment
    • Standardized communication protocol
  2. Extensible Architecture

    • Custom tool definitions
    • Plugin-based system
    • Easy integration with existing services
  3. Development Environment Awareness

    • Context-aware assistance
    • Access to local resources
    • Real-time feedback loop

What is Model Context Protocol (MCP)?

Model Context Protocol (MCP) is a specification that enables AI models to interact with external tools and services in a standardized way. It defines how tools can expose their functionality through a structured interface that AI models can understand and use.

Key benefits of MCP that I have personally experienced:

  • Standardized tool definitions with JSON Schema
  • Real-time interaction capabilities
  • Session management
  • Built-in VS Code integration

Read more in the MCP Architecture Documentation.

How MCP Tools Work

Each MCP tool follows a standardized structure:

{
  name: "toolName",
  description: "What the tool does",
  parameters: {
    // JSON Schema definition of inputs
  },
  returns: {
    // JSON Schema definition of outputs
  }
}

When an AI model wants to use a tool:

  1. It sends a request with the tool name and parameters
  2. The MCP server validates the request
  3. The tool executes with the provided parameters
  4. Results are returned in a standardized format

This structured approach ensures:

  • Consistent tool behavior
  • Type safety throughout the system
  • Easy tool discovery and documentation
  • Predictable error handling
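For example, when a client invokes the createTodo tool, the request and result travel as JSON-RPC messages. The shapes below are a simplified sketch based on the MCP specification; the values are illustrative:

// Request from the client (tools/call)
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "createTodo",
    "arguments": { "title": "Review PR #123" }
  }
}

// Result returned by the server after validation and execution
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "Created todo: Review PR #123" }]
  }
}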

Architecture Overview

Here’s how the different components interact in our MCP Todo implementation:

graph TD
    A[GitHub Copilot] -->|Natural Language Commands| B[VS Code]
    B -->|MCP Protocol| C[MCP Todo Server]
    C -->|CRUD Operations| D[(LowDB/Database)]
    C -->|Real-time Updates| B
    B -->|Command Results| A

TODO MCP Server Components

Prerequisites

To follow along, you’ll need:

  • Node.js (v22 or higher)
  • VS Code
  • Basic understanding of Express.js
  • npm or yarn package manager

Setting Up the Project

  1. First, create a new project and initialize npm:
mkdir mcp-todo-server
cd mcp-todo-server
npm init -y
  2. Install required dependencies:
npm install @modelcontextprotocol/sdk express lowdb zod

For this demonstration, we’re using lowdb to manage tasks in a JSON file without actual integration with an external system. In a production environment, the lowdb functions can be replaced with actual JIRA CRUD API calls for end-to-end implementation.

  3. Create the basic directory structure:
mcp-todo-server/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ config/
β”‚   β”œβ”€β”€ tools/
β”‚   β”œβ”€β”€ utils/
β”‚   └── server.js
└── package.json

Implementing the MCP Server

1. Basic Server Setup

We started with a basic Express server that implements the MCP protocol. The server uses StreamableHTTP for real-time communication and session management.

Key components in server.js:

  • Express server setup
  • MCP SDK integration
  • StreamableHTTP transport configuration
  • Session management for maintaining tool state
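Below is a condensed sketch of what server.js can look like, assuming the JavaScript/TypeScript MCP SDK's McpServer and StreamableHTTPServerTransport classes. Module paths, option names, and the registerTodoTools helper are illustrative and may differ from the actual repository:

// server.js - minimal MCP server over Streamable HTTP (sketch)
import express from "express";
import { randomUUID } from "node:crypto";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { registerTodoTools } from "./tools/index.js"; // hypothetical helper that registers the four todo tools

const app = express();
app.use(express.json());

const server = new McpServer({ name: "todo-mcp-server", version: "1.0.0" });
registerTodoTools(server);

// One transport handles the MCP traffic; session IDs are generated on initialize
const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: () => randomUUID(),
});
await server.connect(transport);

// All MCP traffic flows through a single /mcp endpoint
app.post("/mcp", (req, res) => transport.handleRequest(req, res, req.body));
app.get("/mcp", (req, res) => transport.handleRequest(req, res));
app.delete("/mcp", (req, res) => transport.handleRequest(req, res));

app.listen(3000, () => console.log("MCP Todo server listening on :3000"));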

2. Database Configuration

We used lowdb, a lightweight JSON database, to persist our todos. The database configuration in config/db.js handles:

  • JSON file storage
  • Basic CRUD operations
  • Data persistence between server restarts
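A config/db.js along these lines is enough. This is a sketch assuming lowdb v7's Low/JSONFile API, and the helper names (getDb, saveDb) are illustrative rather than taken from the repository:

// config/db.js - JSON file persistence with lowdb (sketch)
import { Low } from "lowdb";
import { JSONFile } from "lowdb/node";

const adapter = new JSONFile("db.json");
// Default data is used when db.json does not exist yet
const db = new Low(adapter, { todos: [] });

export async function getDb() {
  await db.read();           // load current contents from disk
  db.data ||= { todos: [] };
  return db;
}

export async function saveDb() {
  await db.write();          // persist changes back to db.json
}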

3. Implementing Todo Tools

We implemented four main tools for managing todos:

  1. createTodo

    • Creates new todo items
    • Validates input using Zod schema
    • Returns the created todo with a unique ID
  2. listTodos

    • Lists all todos or filters by completion status
    • Formats output for easy reading
    • Supports real-time updates
  3. updateTodo

    • Updates todo completion status
    • Validates input parameters
    • Returns updated todo information
  4. deleteTodo

    • Removes todos by ID
    • Provides completion confirmation
    • Handles error cases gracefully
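As an example, the createTodo tool could be registered roughly like this, assuming the SDK's server.tool() registration API and the db helpers sketched above (names and messages are illustrative, not the exact repository code):

// tools/createTodo.js - register the createTodo tool (sketch)
import { z } from "zod";
import { randomUUID } from "node:crypto";
import { getDb, saveDb } from "../config/db.js";

export function registerCreateTodo(server) {
  server.tool(
    "createTodo",
    "Create a new todo item",
    { title: z.string().min(1).describe("Short description of the task") },
    async ({ title }) => {
      const db = await getDb();
      const todo = { id: randomUUID(), title, completed: false };
      db.data.todos.push(todo);
      await saveDb();
      // MCP tools return their result as a content array
      return { content: [{ type: "text", text: `Created todo ${todo.id}: ${todo.title}` }] };
    }
  );
}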

VS Code Integration

To enable VS Code to use our MCP server, follow these steps:

  1. Enable Agent mode in VS Code. Click the dropdown next to the model picker in the Copilot chat window and select Agent.

Enable Agent Mode in VS Code

  2. Click the gear icon next to the speaker icon shown in the image above, select "Add more tools", and then select "Add MCP Server".

Add MCP Server

  3. Select HTTP or Server-Sent Events and provide the URL of the server we created, in this case http://localhost:3000. Then choose a name for the server.

Select HTTP or Server-Sent events

Provide name

Select settings

  4. Alternatively, press Ctrl+Shift+P (Windows/Linux) or Cmd+Shift+P (Mac) to open the Command Palette, type "Open Settings (JSON)", select it, and add the following configuration:
{
    "mcp": {
        "servers": {
            "my-mcp-server": {
                "url": "http://localhost:3000/mcp"
            }
        }
    }
}

You can use step 4 to verify that the server was added correctly after completing the first three steps. The settings.json entry shows Start, Stop, and Restart options, which I found effective for identifying any issues with the MCP tools server.

Settings JSON

  5. Reload VS Code to apply the changes, or use the Start, Stop, and Restart options in settings.json as shown above.

  6. After the MCP server has been added successfully, you should see its tools listed when you click the gear icon in the Copilot chat window.

Tools listed successfully

Using the Todo MCP Server

Here are some example prompts you can use in VS Code with GitHub Copilot to interact with the todo server. Each example includes a screenshot of the actual interaction:

  1. Creating a Todo

    Prompt: "Create a new todo item called 'Review PR #123'"
    Response: Successfully created todo "Review PR #123"
    

    Screenshots: creating a new todo

  2. Listing Todos

    Prompt: "Show me all my todos"
    Response: Here are your todos:
    - Review PR #123 (Not completed)
    - Update documentation (Completed)
    - Setup test environment (Not completed)
    

    Screenshots: listing all todos

  3. Updating a Todo

    Prompt: "Mark the todo about PR review as completed"
    Response: Updated "Review PR #123" to completed
    
  4. Deleting a Todo

    Prompt: "Delete the todo about documentation"
    Response: Successfully deleted "Update documentation"
    
  5. Filtering Todos

    Prompt: "Show me only completed todos"
    Response: Completed todos:
    - Review PR #123
    

Next Steps and Improvements

Potential enhancements for the project:

  1. Authentication

    • Add user authentication
    • Implement role-based access
  2. Advanced Features

    • Due dates for todos
    • Categories/tags
    • Priority levels
  3. Performance

    • Caching
    • Database optimization
    • Rate limiting
  4. Testing

    • Unit tests
    • Integration tests
    • Load testing

Troubleshooting

Common Issues and Solutions

  1. Server Connection Issues

    • Verify the server is running on port 3000
    • Check VS Code settings for correct server URL
    • Ensure no firewall blocking the connection
  2. Tool Registration Problems

    Error: Tool 'createTodo' not found
    Solution: Check if server is properly initializing tools in server.js
    
  3. Schema Validation Errors

    • Ensure todo items match the required schema
    • Check Zod validation rules in tool implementations
    • Verify JSON payload format
  4. Real-time Updates Not Working

    • Confirm SSE (Server-Sent Events) connection is established
    • Check browser console for connection errors
    • Verify StreamableHTTP transport configuration

Source Code Reference

Key implementation files are listed in the Project Structure section below and in the GitHub repository linked at the top of this post.

Conclusion

We’ve successfully built a fully functional MCP-compatible Todo server that:

  • Implements CRUD operations
  • Maintains persistent storage
  • Provides real-time updates
  • Integrates seamlessly with VS Code

This implementation serves as a great starting point for building more complex MCP tools and understanding how AI models can interact with custom tools through the Model Context Protocol.

Resources

mcp-todo-server

This project is a simple MCP server that manages a todo list using the Model Context Protocol TypeScript SDK. It provides a RESTful API for creating, updating, and deleting todo items.

Project Structure

mcp-todo-server
β”œβ”€β”€ src
β”‚   β”œβ”€β”€ resources
β”‚   β”‚   └── todos.js
β”‚   β”œβ”€β”€ tools
β”‚   β”‚   β”œβ”€β”€ createTodo.js
β”‚   β”‚   β”œβ”€β”€ updateTodo.js
β”‚   β”‚   └── deleteTodo.js
β”‚   β”œβ”€β”€ config
β”‚   β”‚   └── db.js
β”‚   β”œβ”€β”€ utils
β”‚   β”‚   └── sessionManager.js
β”‚   └── server.js
β”œβ”€β”€ db.json
β”œβ”€β”€ package.json
β”œβ”€β”€ .gitignore
└── README.md

Installation

  1. Clone the repository:

    git clone https://github.com/prakashm88/mcp-todo-server.git
    cd mcp-todo-server
    
  2. Install the dependencies:

    npm install
    

Usage

To start the server, run:

npm start

The server will listen on port 3000.

API Endpoints

  • POST /mcp: Handles client-to-server communication for creating and managing todos.
  • GET /mcp: Retrieves server-to-client notifications.
  • DELETE /mcp: Terminates a session.
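For a quick smoke test outside VS Code, the endpoint can be exercised directly. The snippet below is a rough sketch of a tools/list call per the MCP Streamable HTTP transport; the initialize handshake and mcp-session-id header handling are omitted for brevity, so a real client should use the MCP SDK (or VS Code itself) instead:

// Quick check that the server responds to MCP JSON-RPC requests (sketch)
fetch("http://localhost:3000/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP servers may answer with plain JSON or an SSE stream
    Accept: "application/json, text/event-stream",
  },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
})
  .then((res) => res.text())
  .then((body) => console.log(body));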

Database

The project uses lowdb to manage the todo items, stored in db.json.

Contributing

Contributions are welcome! Here’s how you can help:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/my-new-feature
  3. Commit your changes: git commit -am 'Add new feature'
  4. Push to the branch: git push origin feature/my-new-feature
  5. Submit a pull request

You can also:

  • Report bugs by creating issues
  • Suggest improvements through discussions
  • Help improve documentation

Please read our Contributing Guidelines for more details.

Email or Website Summarization Browser Extension

Browser Extensions

Continuing the journey of browser extensions, let's see how a browser extension can help with email or website summarization using a Generative AI API integration.

I used Node.js as my backend to create an API based on Vertex AI for summarization. Here is the documentation for creating an API using Vertex AI.
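For context, the backend endpoint the extension calls (POST /secure/ai/summary, returning a message field) can be sketched roughly as follows with Express and the @google-cloud/vertexai SDK. The project, location, model name, and route are assumptions chosen to match the extension code below, not the exact implementation:

// server.js - minimal summarization API backed by Vertex AI (sketch)
import express from "express";
import { VertexAI } from "@google-cloud/vertexai";

const app = express();
app.use(express.json());

const vertex = new VertexAI({ project: "my-gcp-project", location: "us-central1" }); // assumed project/location
const model = vertex.getGenerativeModel({ model: "gemini-1.5-flash" }); // assumed model name

app.post("/secure/ai/summary", async (req, res) => {
  try {
    const result = await model.generateContent(req.body.prompt);
    const text = result.response.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
    // The extension expects the summary in a "message" field
    res.json({ message: text });
  } catch (err) {
    console.error(err);
    res.status(500).json({ message: "Error generating summary." });
  }
});

app.listen(3001, () => console.log("Summarization API listening on :3001"));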

Now, getting into extension development: the basics are already covered in the blog post here, and the important parts are popup.js and content.js.

popup.js

document.getElementById("summarizeEmail").addEventListener("click", () => {
  chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
    chrome.tabs.sendMessage(tabs[0].id, { action: "getEmailContent" }, (response) => {
      if (response.emailContent) {
        chrome.runtime.sendMessage({ action: "summarizeEmail", emailContent: response.emailContent }, (response) => {
          document.getElementById("summary").innerText = response.summary;
        });
      }
    });
  });
});

document.getElementById("summarizeText").addEventListener("click", () => {
  chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
    chrome.scripting.executeScript(
      {
        target: { tabId: tabs[0].id },
        function: getSelectedText,
      },
      (results) => {
        if (results && results[0] && results[0].result) {
          chrome.runtime.sendMessage({ action: "summarizeText", textContent: results[0].result }, (response) => {
            document.getElementById("summary").innerText = response.summary;
          });
        }
      }
    );
  });
});

function getSelectedText() {
  return window.getSelection().toString();
}

content.js

// Function to inject the "AI Summary" button into Gmail
const injectAISummaryButton = () => {
  const existingButton = document.getElementById("ai-summary-button");
  if (existingButton) {
    return; // Button already exists, do not add it again
  }

  const targetElement = document.querySelector(".AO");
  if (targetElement) {
    const aiSummaryButton = document.createElement("button");
    aiSummaryButton.id = "ai-summary-button";
    aiSummaryButton.innerText = "AI Summary";
    aiSummaryButton.style.position = "absolute";
    aiSummaryButton.style.top = "10px";
    aiSummaryButton.style.right = "10px";
    aiSummaryButton.style.zIndex = 10000;
    aiSummaryButton.style.backgroundColor = "#007bff";
    aiSummaryButton.style.color = "#ffffff";
    aiSummaryButton.style.border = "none";
    aiSummaryButton.style.padding = "10px";
    aiSummaryButton.style.cursor = "pointer";

    aiSummaryButton.addEventListener("click", () => {
      const emailContent = getEmailContent();
      if (emailContent) {
        chrome.runtime.sendMessage({ action: "summarizeEmail", emailContent: emailContent }, (response) => {
          showAISummaryOverlay(response.summary);
        });
      }
    });

    targetElement.prepend(aiSummaryButton);
  }
};

// Function to extract email content from Gmail's DOM
const getEmailContent = () => {
  const emailContentElement = document.querySelector(".AO"); // Selector for the email body content
  return emailContentElement ? emailContentElement.innerText : "";
};

// Function to create and show the AI Summary overlay
const showAISummaryOverlay = (summary) => {
  // Remove existing overlay if present
  const existingOverlay = document.getElementById("ai-summary-overlay");
  if (existingOverlay) {
    existingOverlay.remove();
  }

  // Create overlay elements
  const overlay = document.createElement("div");
  overlay.id = "ai-summary-overlay";
  overlay.style.position = "fixed";
  overlay.style.top = "0";
  overlay.style.left = "0";
  overlay.style.width = "100%";
  overlay.style.height = "100%";
  overlay.style.backgroundColor = "rgba(0, 0, 0, 0.7)";
  overlay.style.zIndex = 10000;
  overlay.style.display = "flex";
  overlay.style.alignItems = "center";
  overlay.style.justifyContent = "center";

  const content = document.createElement("div");
  content.style.backgroundColor = "white";
  content.style.padding = "20px";
  content.style.borderRadius = "10px";
  content.style.maxWidth = "500px";
  content.style.boxShadow = "0 0 10px rgba(0, 0, 0, 0.5)";

  const title = document.createElement("h2");
  title.innerText = "AI Summary";
  title.style.marginTop = "0";

  const summaryText = document.createElement("p");
  summaryText.innerText = summary;

  const closeButton = document.createElement("button");
  closeButton.innerText = "Close";
  closeButton.style.marginTop = "20px";
  closeButton.style.padding = "10px";
  closeButton.style.backgroundColor = "#007bff";
  closeButton.style.color = "white";
  closeButton.style.border = "none";
  closeButton.style.borderRadius = "5px";
  closeButton.style.cursor = "pointer";

  closeButton.addEventListener("click", () => {
    overlay.remove();
  });

  // Append elements
  content.appendChild(title);
  content.appendChild(summaryText);
  content.appendChild(closeButton);
  overlay.appendChild(content);
  document.body.appendChild(overlay);
};

// Inject the "AI Summary" button when the content script is loaded
injectAISummaryButton();

// Observe changes in the Gmail DOM to inject the button when necessary
const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    if (mutation.type === "childList") {
      injectAISummaryButton();
    }
  }
});

observer.observe(document.body, { childList: true, subtree: true });

// Listen for messages from the popup or background script
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  if (request.action === "getEmailContent") {
    const emailContent = getEmailContent();
    sendResponse({ emailContent: emailContent });
  }
});

background.js

const API_ENDPOINT = "http://localhost:3001/secure/ai";

const getSummary = (content, callback) => {
  const encodedContent = encodeURIComponent(content);

  fetch(API_ENDPOINT + "/summary", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prompt: "Please summarize the below uri encoded content. \n\n" + content }),
  })
    .then((response) => {
      if (!response.ok) {
        throw new Error("Network response was not ok " + response.statusText);
      }
      return response.json();
    })
    .then((data) => {
      if (callback && typeof callback === "function") {
        callback(data);
      }
    })
    .catch((error) => {
      console.error("Error: " + error);
      if (callback && typeof callback === "function") {
        callback({ summary: "Error fetching summary." });
      }
    });
};

chrome.runtime.onInstalled.addListener(() => {
  chrome.contextMenus.create({
    id: "summarizeText",
    title: "AI Summary",
    contexts: ["selection"],
  });
});

chrome.contextMenus.onClicked.addListener((info, tab) => {
  if (info.menuItemId === "summarizeText") {
    const selectedText = info.selectionText;

    console.log("Selected Text: " + selectedText);

    getSummary(selectedText, (data) => {
      chrome.scripting.executeScript({
        target: { tabId: tab.id },
        func: (summary) => {
          const summaryElement = document.createElement("div");
          summaryElement.style.position = "fixed";
          summaryElement.style.bottom = "10px";
          summaryElement.style.right = "10px";
          summaryElement.style.backgroundColor = "white";
          summaryElement.style.border = "1px solid black";
          summaryElement.style.padding = "10px";
          summaryElement.style.zIndex = 10000;
          summaryElement.innerText = summary;
          document.body.appendChild(summaryElement);
        },
        args: [data.message],
      });
    });
  }
});

chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  if (request.action === "summarizeEmail") {
    const selectedText = request.emailContent;

    console.log("Selected Email Content: " + selectedText);

    getSummary(selectedText, (data) => {
      sendResponse({ summary: data.message });
    });

    return true; // Will respond asynchronously.
  }
});

Here is what's happening:

  1. The content script listens for Gmail page loads; once the page is loaded, injectAISummaryButton in content.js is invoked.
  2. The "AI Summary" button is injected at the top right corner of the page.
  3. A context menu entry is also added when text is selected; its click event is handled in background.js.
  4. Selecting text on any web page and clicking the context menu entry, or clicking "AI Summary" on the Gmail page, sends the content to the API and shows the summarized response to the user.

Refer to the full Chrome extension on GitHub.

Browser extension sample – Chrome/Edge – HttpRequestViewer

Browser Extensions

The Evolution of Browser Extensions: From Web Customization to Advanced Development Tools – Part 2
We discussed the evolution of browser extensions in the previous post. Let's quickly learn how to create a Chrome/Edge/Firefox extension. I mentioned "Advanced development tools" in the title but never got a chance to explore those capabilities earlier. We will create a simple extension to explore the power of it.

Creating a browser extension has never been easier, thanks to the comprehensive documentation and support provided by browser vendors. Below, we'll walk through the steps to create a simple extension for both Chrome and Microsoft Edge using Manifest V3. We will use this extension to capture the HTTP requests fired from a given page and list them on a page.

Basics of extensions:

Manifests – A manifest is a JSON file that contains metadata about a browser extension, such as its name, version, permissions, and the files it uses. It serves as the blueprint for the extension, informing the browser about the extension’s capabilities and how it should be loaded.

Key Components of a Manifest File:

Here are the key components typically found in a Manifest V3 file:

1. Manifest Version: There are different versions of the manifest file, with Manifest V3 being the latest and most widely adopted version. Manifest V3 introduces several changes aimed at improving security, privacy, and performance, with a lot of controversy around it. Read more about the controversies at Ghostery.
2. Name and Version: These fields define the name and version of the extension. Choose a unique name and version. An excellent guide of version semantics is available here.
3. Description: A short description of the extension’s functionality.
4. Action: Defines the default popup and icon for the browser action (e.g., toolbar button).
5. Background: Specifies the background script that runs in the background and can handle events like network requests and alarms.
6. Content Scripts: Defines scripts and stylesheets to be injected into matching web pages.
7. Permissions: Lists the permissions the extension needs to operate, such as access to tabs, storage, and specific websites.
8. Icons: Specifies the icons for the extension in different sizes. For this post, I created a simple icon using Microsoft Designer: I gave a simple prompt with the description above and got the image below. The extension requires different sizes for showing the icon in different places, so I used the Chrome Extension Icon Generator to generate the sizes needed.


9. Web Accessible Resources: Defines which resources can be accessed by web pages.

Create a project structure as follows:

HttpRequestViewer/
|-- manifest.json
|-- popup.html
|-- popup.js
|-- background.js
|-- history.html
|-- history.js
|-- popup.css
|-- styles.css
|-- icons/
    |-- icon.png
    |-- icon16.png
    |-- icon32.png
    |-- icon48.png
    |-- icon128.png

Manifest.json

{
  "name": "API Request Recorder",
  "description": "Extension to record all the HTTP requests from a webpage.",
  "version": "0.0.1",
  "manifest_version": 3,
  "host_permissions": ["<all_urls>"],
  "permissions": ["activeTab", "webRequest", "storage"],
  "action": {
    "default_popup": "popup.html",
    "default_icon": "icons/icon.png"
  },
  "background": {
    "service_worker": "background.js"
  },
  "icons": {
    "16": "icons/icon16.png",
    "32": "icons/icon32.png",
    "48": "icons/icon48.png",
    "128": "icons/icon128.png"
  },
  "content_security_policy": {
    "extension_pages": "script-src 'self'; object-src 'self';"
  },
  "web_accessible_resources": [{ "resources": ["images/*.png"], "matches": ["https://*/*"] }]
}

popup.html
We have two options with the extension.

1. A button with record option to start recording all the HTTP requests
2. Link to view the history of HTTP Requests recorded

<!DOCTYPE html>
<html>
  <head>
    <title>API Request Recorder</title>

    <link rel="stylesheet" href="popup.css" />
  </head>
  <body>
    <div class="heading">
      <img class="logo" src="icons/icon48.png" />
      <h1>API Request Recorder</h1>
    </div>
    <button id="startStopRecord">Record</button>

    <div class="button-group">
      <a href="#" id="history">View Requests</a>
    </div>

    <script src="popup.js"></script>
  </body>
</html>

popup.js
Two event listeners are registered: one for recording (start/stop) and one for viewing history.
The first sends a message to background.js, while the second instructs Chrome to open the history page in a new tab.

document.getElementById("startStopRecord").addEventListener("click", () => {
  chrome.runtime.sendMessage({ action: "startStopRecord" });
});

document.getElementById("history").addEventListener("click", () => {
  chrome.tabs.create({ url: chrome.runtime.getURL("/history.html") });
});

history.html

 
<!DOCTYPE html>
<html>
  <head>
    <title>History</title>
    <link rel="stylesheet" href="styles.css" />
  </head>
  <body>
    <h1>History Page</h1>
    <table>
      <thead>
        <tr>
		  <th>Method</th>
          <th>URL</th>
          <th>Body</th>
        </tr>
      </thead>
      <tbody id="recorded-data-body">
        <!-- Data will be populated here -->
      </tbody>
    </table>
    <script src="history.js"></script>
  </body>
</html>

history.js
Requests the recorded data from background.js ("getRecordedData") and renders the result as an HTML table.

document.addEventListener("DOMContentLoaded", () => {
  chrome.runtime.sendMessage({ action: "getRecordedData" }, (response) => {
    const tableBody = document.getElementById("recorded-data-body");
    response.forEach((record) => {
      const row = document.createElement("tr");
      const urlCell = document.createElement("td");
      const methodCell = document.createElement("td");
      const bodyCell = document.createElement("td");

      urlCell.textContent = record.url;
      methodCell.textContent = record.method;
      bodyCell.textContent = record.body;

      row.appendChild(methodCell);
      row.appendChild(urlCell);
      row.appendChild(bodyCell);
      tableBody.appendChild(row);
    });
  });
});

background.js
The background script works as a service worker for this extension, listening for and handling events.
It does not have access to directly manipulate the page content, but it can post results back for the popup/history scripts to handle the presentation.

let isRecording = false;
let recordedDataList = [];

chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  console.log("Obtained message: ", message);
  if (message.action === "startStopRecord") {
    if (isRecording) {
      isRecording = false;
      console.log("Recording stopped...");
      sendResponse({ recorder: { status: "stopped" } });
    } else {
      isRecording = true;
      console.log("Recording started...");
      sendResponse({ recorder: { status: "started" } });
    }
  } else if (message.action === "getRecordedData") {
    sendResponse(recordedDataList);
  } else {
    console.log("Unhandled action ...");
  }
});

chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    if (isRecording) {
      let requestBody = "";
      if (details.requestBody) {
        if (details.requestBody.formData) {
          requestBody = JSON.stringify(details.requestBody.formData);
        } else if (details.requestBody.raw) {
          requestBody = new TextDecoder().decode(new Uint8Array(details.requestBody.raw[0].bytes));
        }
      }
      recordedDataList.push({
        url: details.url,
        method: details.method,
        body: requestBody,
      });
      console.log("Recorded Request:", {
        url: details.url,
        method: details.method,
        body: requestBody,
      });
    }
  },
  { urls: ["<all_urls>"] },
  ["requestBody"]
);

Let's load the Extension

All set, now let's load the extension and test it.

  • Open Chrome/Edge and go to chrome://extensions/ or edge://extensions/ based on your browser.
  • Enable “Developer mode” using the toggle in the top right corner.
  • Click “Load unpacked” and select the directory of your extension.

Load extension / Upload extension

  • Your extension should now be loaded, and you can interact with it using the popup.
  • When you click the “Record” button, it will start logging API requests to the console.

  • Click the “Record” button again and hit the “View requests” link in the popup to view the history of APIs.

I have a sample page (https://itechgenie.com/demos/apitesting/index.html) with 4 API calls, which also loads images based on the API responses. You can see all the requests fired from the page, including the JS, CSS, images, and API calls.


Now it's up to the developer's imagination to build on this extension, handle the request and response data of these APIs, and deliver a different experience.

Code is available in GitHub at HttpRequestViewer

Installing Oracle JDK in Amazon AWS EC2 Ubuntu

Recently I tried to install the Oracle JDK on one of my Ubuntu servers running on an Amazon EC2 instance. Unfortunately, the built-in installers only support OpenJDK.

For some requirements I needed to install a specific version of the JDK to test my application; you can get older versions from the Oracle site. I used the following script from one of the blogs I found, hope it helps someone.

#!/usr/bin/env bash
wget -O 'jdk-7u80-linux-x64.tar.gz' --no-cookies --no-check-certificate --header 'Cookie:gpw_e24=http://www.oracle.com; oraclelicense=accept-securebackup-cookie' 'http://download.oracle.com/otn-pub/java/jdk/7u80-b15/jdk-7u80-linux-x64.tar.gz'
tar -xvf jdk-7u80-linux-x64.tar.gz
sudo mkdir /usr/lib/jvm
sudo mv ./jdk1.7* /usr/lib/jvm/jdk1.7.0
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0/bin/java" 1
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0/bin/javac" 1
sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.7.0/bin/javaws" 1
sudo chmod a+x /usr/bin/java
sudo chmod a+x /usr/bin/javac
sudo chmod a+x /usr/bin/javaws

The key here is that Oracle requires you to accept the license terms before using any version of the Oracle JDK. You can do this from a script by adding the --no-cookies --no-check-certificate --header 'Cookie:gpw_e24=http://www.oracle.com; oraclelicense=accept-securebackup-cookie' parameters to wget.

Alternatively, you can download the installers/zip files from external CDNs like REUCON, move them to the EC2 instance over SFTP, and install them.

Errors running builder ‘JavaScript Validator’ on project

I got this annoying exception on every auto build of my Dynamic Web Project.

“Errors occurred during the build.
Errors running builder ‘JavaScript Validator’ on project ‘GenieAlert’.
java.lang.NullPointerException”

I tried to stop JavaScript validation in the Eclipse preferences with the following options:

Windows -> Preferences -> Validation -> Client-side JavaScript Validator -> Checked Manual & Unchecked Build.

This option didn't work out; then I realized that the error occurs only at build time. The following option came in handy instead:

Project -> Properties -> Builders -> Unchecked ‘Javascript Validator’

Sometimes when you try to run web projects from Eclipse on servers, this JavaScript validation stops the deployment with a "JavaScript Validation Exception found" message. I hope the above solutions help in those situations too.

Create Web Services using Axis Java2WSDL, WSDL2Java and Eclipse for all Servers manually – Part 2

With all the basic configurations done as specified in the last Article we continue to develop the Business logic.

  1. Create a class named Calculator.java with four public methods, add, subtract, multiply and divide, and place the appropriate logic in each.
    package com.itechgenie.services.impl;
    public class Calculator {
    	public int add(int a, int b) {
    		return a+b  ;
    	}
    
    	public int subtract(int a, int b) {
    		return a-b ;
    	}
    
    	public int multiply(int a, int b) {
    		return a * b ;
    	}
    
    	public int divide(int a, int b) throws ArithmeticException {
    		return a /b ;
    	}
    }
  2. This is the class that has to be exposed as the Web Service, and we write the funky TestRunner.java class to do all our operations like creating the WSDL file, creating the stub files, etc.
  3. Generate WSDL file using Java2WSDL: Axis has a tool called Java2WSDL, which generates a WSDL file for a web service using a Java class. Java2WSDL file takes the following arguments.
    1. o – name for WSDL file -> calculator.wsdl
    2. n – target namespace -> mx:com.itechgenie.services.Calculator
    3. l – url of web service -> http://<host:port>/<Project-Name>/services/calculator

    Summing up the above arguments the following command line arguments is created.

    String java2wsdlArgs[] = {"-ocalculator.wsdl", "-nmx:com.itechgenie.services.Calculator", "-v", "-lhttp://localhost:8080/axis/services/calculator", "com.itechgenie.services.Calculator"} ;

    Read this Article on how to run the command line java tools from Eclipse.
    You can run Java2WSDL as follows in the TestRunner class. Naah, don't ask how, just put the following lines in the main method and press CTRL + F11.

    try {
    	Java2WSDL.main(java2wsdlArgs) ;
    } catch (Exception e) {
    	e.printStackTrace() ;
    }

    The Java2WSDL class calls System.exit(0) internally, so lines after the Java2WSDL call will not be executed. To see the other supported arguments you can just run Java2WSDL.main(new String[0]);. This will display all the arguments supported by the Java2WSDL utility, and the same works for the other utilities too.
    After running this Utility you will find the calculator.wsdl file created in the root folder of the Project.

  4. Generate server-side and client-side code using WSDL2Java: WSDL2Java is another tool provided by Axis, which can generate server-side and client-side Java classes from a WSDL file. These classes are needed for deploying the web service and also for accessing the web service from a Java client. This tool expects the following arguments, which include the WSDL file generated in the last step.
    1. o – output folder -> src
    2. p – package for generated classes -> com.itechgenie.generated.service
    3. s – generate server side classes as well
    4. *.wsdl – WSDL file of any web service

    Summing up the above arguments the following command line arguments is created.

    String wsdl2javaArgs[] = {"-osrc", "-pcom.itechgenie.generated.service", "-s", "calculator.wsdl", "-v"} ;

    Read this Article on how to run the command line java tools from Eclipse.
    Now run the WSDL2Java utility as follows.

    try {
    	WSDL2Java.main(wsdl2javaArgs) ;
    } catch (Exception e) {
    	e.printStackTrace() ;
    }

    Once the above command is run, Just refresh the project in eclipse, you will find the following files created inside the β€œcom.itechgenie.generated.service” package.

    1. Calculator.java
    2. CalculatorService.java
    3. CalculatorServiceLocator.java
    4. CalculatorSoapBindingImpl.java
    5. CalculatorSoapBindingStub.java
    6. deploy.wsdd
    7. undeploy.wsdd

    The above files can be used in both Server and Clients side as Skeleton (CalculatorSoapBindingImpl.java) and the Stub (CalculatorSoapBindingStub.java) respectively.

  5. Binding the business logic with the Skeleton: Open the skeleton file and you will find the exact methods that were available in our business logic class (Calculator.java).
    Create an instance of the business class and invoke the appropriate method from each skeleton method, as follows.

    package com.itechgenie.generated.service;
    
    import com.itechgenie.services.Calculator;
    
    public class CalculatorSoapBindingImpl implements com.itechgenie.generated.service.Calculator{
    
    	Calculator calculatorImpl = new Calculator() ;
    
        public int add(int in0, int in1) throws java.rmi.RemoteException {
        	return calculatorImpl.add(in0, in1) ;
        }
    
        public int subtract(int in0, int in1) throws java.rmi.RemoteException {
        	return calculatorImpl.subtract(in0, in1) ;
        }
    
        public int divide(int in0, int in1) throws java.rmi.RemoteException {
        	return calculatorImpl.divide(in0, in1) ;
        }
    
        public int multiply(int in0, int in1) throws java.rmi.RemoteException {
        	return calculatorImpl.multiply(in0, in1) ;
        }
    
    }

    That’s it; we are now done with the development part of the Web Service. All we have to do is to configure to make the service up and running.

  6. Last configurations to make our service available: Open the server-config.wsdd file inside the WEB-INF folder. You will find the following lines.
      <!--  Your Service from the deploy.wsdd file - Starts here -->
    
      <!--  Your Service from the deploy.wsdd file - Ends here -->

    Keep the file aside and open the deploy.wsdd from the WSDL2Java generated files. Copy the <service> … </service> tag completely and paste in between the comments said above.

  7. Conclusion: You can follow the steps above to create as many services as you want and paste them into server-config.wsdd.
    With this, the configuration for the Web Service is complete. Export the project as a WAR, deploy it on a web server, and point to the URL http://<host:port>/<Project-Name>/services
  8. This URL should display all the services created above, with links to their WSDL files.

    Click here to download the sample project.

Create Web Services using Axis Java2WSDL, WSDL2Java and Eclipse for all Servers manually – Part 1

There are a lot of Web Service implementations available in the market, and the most widely used among them is Axis. There are plenty of examples on the web for creating, exposing, and consuming web services using the full Axis packages. But it is not always feasible to get the complete Axis distribution inside corporate offices all of a sudden, and yes, I faced exactly that situation.

After some investment of time, I found some funky stuff on the web to create a Web Service with just a couple of jars in hand and, of course, with the help of Eclipse.

Prerequisites:

  1. Eclipse, any version should be ok, but I was using the Eclipse Indigo with Ant installed in it.
  2. The set of jars needed. Jars are included in the Project sample.
    1. axis.jar
    2. commons-discovery-0.2.jar
    3. commons-logging.jar
    4. jaxrpc.jar
    5. log4j-1.2.15.jar
    6. saaj.jar
    7. wsdl4j.jar
    8. The sample web.xml, server-config.wsdd (These will be used later in the development steps).

Steps to develop Web Services:

  1. Create a Dynamic Web Project β€œSampleWebService” in Eclipse.
  2. Place the above said jars in the WEB-INF/jars folder.
  3. Open the web.xml file and copy the following contents into it, somewhere between the <web-app> tags. These contents are available in the attached sample.
      <servlet>
        <display-name>Apache-Axis Servlet</display-name>
        <servlet-name>AxisServlet</servlet-name>
        <servlet-class>org.apache.axis.transport.http.AxisServlet</servlet-class>
      </servlet>
      <servlet-mapping>
        <servlet-name>AxisServlet</servlet-name>
        <url-pattern>/servlet/AxisServlet</url-pattern>
      </servlet-mapping>
      <servlet-mapping>
        <servlet-name>AxisServlet</servlet-name>
        <url-pattern>*.jws</url-pattern>
      </servlet-mapping>
      <servlet-mapping>
        <servlet-name>AxisServlet</servlet-name>
        <url-pattern>/services/*</url-pattern>
      </servlet-mapping>
      <servlet>
        <display-name>Axis Admin Servlet</display-name>
        <servlet-name>AdminServlet</servlet-name>
        <servlet-class>org.apache.axis.transport.http.AdminServlet</servlet-class>
        <load-on-startup>100</load-on-startup>
      </servlet>
      <servlet-mapping>
        <servlet-name>AdminServlet</servlet-name>
        <url-pattern>/servlet/AdminServlet</url-pattern>
      </servlet-mapping>
  4. Copy the server-config.wsdd next to web.xml file. We will reuse this file once again after complete the business logic of the server.
  5. Now the basic configurations are complete, we have to develop the business logic for the Web Service. In my example I have taken the Old school Calculator sample.
    Click here to go to the next Part of this article.

Dynamically adding HTML components using JavaScript

At times there will be a requirement to dynamically add a section of HTML multiple times in the UI. This can be achieved using JavaScript. One implementation for such a scenario is addressed in the following example.
Requirement:
In this example, there was a need to dynamically add a row of three text boxes and a select box to a page on a button click. Validation has to be done on the dynamically added rows.

Design & Usage:

The design of the sample goes like this.

There will be a container to hold the control items, such as the button that adds the component and a field that holds the counter. Following the control box is an empty container that holds the dynamically added components. Another container just below this holds the content that has to be created dynamically. That content has to be formulated carefully with the right layout and identification. To make it easier to identify the elements inside the components, an auto-generated ID is added to every element on the fly. To achieve this, a delimiter is added to all the elements; here the delimiter used is "ADDIDHERE", and it is replaced by the counter variable on the go.
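A stripped-down sketch of the idea is shown below; the element IDs, the ADDIDHERE replacement, and the validation rule are illustrative rather than the exact sample code:

// Clone a hidden template row, replacing the ADDIDHERE delimiter with a running counter
var rowCounter = 0;

function addRow() {
  rowCounter++;
  var template = document.getElementById("row-template").innerHTML; // hidden container holding the row markup
  var holder = document.getElementById("rows-holder");              // empty container that receives the rows

  var row = document.createElement("div");
  // Every id/name in the template carries the ADDIDHERE placeholder, e.g. "name_ADDIDHERE"
  row.innerHTML = template.replace(/ADDIDHERE/g, rowCounter);
  holder.appendChild(row);

  document.getElementById("row-count").value = rowCounter;           // counter field used later for validation
}

function validateRows() {
  var total = parseInt(document.getElementById("row-count").value, 10) || 0;
  for (var i = 1; i <= total; i++) {
    var field = document.getElementById("name_" + i);
    if (field && field.value === "") {
      alert("Row " + i + ": please fill in all fields.");
      return false;
    }
  }
  return true;
}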
Source:
View Sample page.
Download Sample.
Screenshot:

DynaComp

Simple methods to Create, Read and Delete Cookies using JavaScript

Usage:

1. createCookie(name,value,days) – Void function.

name – Name for the cookie to be created
value – Value of the cookie
days (INT, not mandatory) – Number of days to keep the cookie in the browser. If not specified, the cookie will expire as soon as the browser is closed.

2. readCookie(name) – Returns the value of the cookie.

name – Name given at the time of creation

3. eraseCookie(name) – Void function.

Dependency – createCookie method should be in place.

function createCookie(name, value, days) {
  if (days) {
    var date = new Date();
    date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000));
    var expires = "; expires=" + date.toGMTString();
  } else {
    var expires = "";
  }
  document.cookie = name + "=" + value + expires + "; path=/";
}

function readCookie(name) {
  var nameEQ = name + "=";
  var ca = document.cookie.split(';');
  for (var i = 0; i < ca.length; i++) {
    var c = ca[i];
    while (c.charAt(0) == ' ') c = c.substring(1, c.length);
    if (c.indexOf(nameEQ) == 0)
      return c.substring(nameEQ.length, c.length);
  }
  return null;
}

function eraseCookie(name) {
  createCookie(name, "", -1);
}
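A quick usage example (the cookie name and value are illustrative):

// Remember the visitor's theme for 30 days
createCookie("theme", "dark", 30);

// Later, read it back (returns null if it was never set or has expired)
var theme = readCookie("theme");
if (theme) {
  document.body.className = theme;
}

// Remove it again
eraseCookie("theme");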