The NmpServer class is the heart of the Data Node. It represents the host infrastructure that securely orchestrates incoming Logic-on-Origin intent from remote Agents. Instead of writing custom execution environments to manage WebAssembly, isolation, and networking, you define straightforward JavaScript/TypeScript functions; the SDK handles the sandboxing automatically.

Initialization and Configuration

You create a Data Node by telling the Mesh who you are and providing configuration properties, such as data security controls.
import { NmpServer, PII_PATTERNS } from "@nekzus/neural-mesh";

const server = new NmpServer(
  {
    name: "EnterpriseDatabaseNode",
    version: "3.2.0",
  },
  {
    // Scales decryption & WASI sandboxing across multiple CPU cores
    workerPool: { enabled: true, maxThreads: 16 },
    // Strict security configuration and data leak prevention
    security: {
      piiPatterns: [PII_PATTERNS.EMAIL, PII_PATTERNS.IP_ADDRESS],
      forbiddenKeys: ["password", "ssn", "secret_token"]
    }
  }
);

Configuration Options

When instantiating an NmpServer, the second parameter allows you to define advanced characteristics of the node:
  • security.forbiddenKeys (Egress Filter): An array of strings. If the model attempts to return a JSON object containing any of these keys (e.g., "password"), the Egress Filter blocks the response before it leaves the server.
  • security.piiPatterns (Sanitization): Predefined patterns (PII_PATTERNS) or custom regular expressions that are matched against outgoing text. If an agent attempts to exfiltrate an email address inside a free-text block, the engine intercepts it.
  • Dynamic Return Structure (Auto-i18n): No manual configuration required. The NmpServer automatically injects Dynamic Return Structure directives into payloads, forcing the AI to return JSON keys in the same native language the client used when making the request and eliminating the need for internationalization (i18n) libraries on your frontend.
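Assuming piiPatterns accepts arbitrary RegExp values alongside the predefined PII_PATTERNS constants (only the constants are shown above), a custom pattern could be registered like this. PaymentsNode, the credit-card regex, and the api_key entry are illustrative, not part of the SDK:

```typescript
import { NmpServer, PII_PATTERNS } from "@nekzus/neural-mesh";

// Hypothetical custom pattern: catch 16-digit card numbers in free text.
const CREDIT_CARD = /\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b/g;

const server = new NmpServer(
  { name: "PaymentsNode", version: "1.0.0" },
  {
    security: {
      // Predefined patterns mixed with a custom RegExp.
      piiPatterns: [PII_PATTERNS.EMAIL, CREDIT_CARD],
      // Responses containing these keys are blocked by the Egress Filter.
      forbiddenKeys: ["password", "api_key"]
    }
  }
);
```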

Creating Capabilities (Tools)

In MCP, these are called Tools. In NMP, we refer to them as Capabilities. To the developer, defining a Capability in NMP looks virtually identical to defining an MCP Tool. You provide a name, a description, a Zod schema for type-safety, and an execution handler.
import { z } from "zod";

server.tool(
  "analyze_employee_data",
  "Analyzes sensitive payroll information inside the Sandbox.",
  { department: z.string() },
  async ({ department }) => {
    // The Agent's injected logic stops here.
    // The Data Node executes the local request under its own permissions.
    const result = await database.query(department);

    return {
      content: [{ type: "text", text: `Payroll evaluated: ${result.summary}` }]
    };
  }
);

How this differs from MCP

If the code looks exactly the same, what makes it Neural Mesh?
  1. No Polling: The Data Node announces the analyze_employee_data schema cryptographically over the Kademlia DHT. The Agent knows immediately that the schema exists, without spending a network round-trip on listTools().
  2. Binary Transport: When the Agent invokes this Tool, the parameters (e.g., department: "HR") are serialized to compact Protobuf bytes, skipping JSON entirely.

Exposing Data Schemas (Resources)

NMP Servers can also expose Resources (static data, schemas, or descriptions) that Agents can discover to understand the shape of the data they will analyze before they inject their logic. This enables true Zero-Shot autonomy.
server.resource(
  "nmp://schema/employee_records",
  "The exact JSON schema representing the employee databases.",
  "application/json",
  JSON.stringify(EmployeeSchema)
);
These resources are bridged to LLMs via the standard MCP resources/list and resources/read methods.
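On the wire, that bridging means an ordinary MCP client can fetch the schema with a standard resources/read call. A sketch of the JSON-RPC 2.0 message shape (the id value is arbitrary):

```typescript
// The JSON-RPC 2.0 request a standard MCP client would issue to read
// the resource registered above; the mesh translates it to NMP transport.
const readRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "resources/read",
  params: { uri: "nmp://schema/employee_records" }
};

console.log(JSON.stringify(readRequest));
```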

Advanced AI Planning (Prompts & Autonomy)

Just like MCP, Neural Mesh Protocol natively supports Prompts—templated conversational instructions that help Agents structure their tasks before executing Logic-on-Origin.
server.prompt(
  "analyze_codebase",
  "Instructs the agent on how to traverse the codebase securely.",
  [
    { name: "language", description: "Target language (e.g., rust, ts)", required: true }
  ],
  (request) => ({
    description: "Codebase Analysis Prompts",
    messages: [
      { role: "user", content: { type: "text", text: `Analyze the ${request.arguments?.language} codebase securely.`} }
    ]
  })
);

Zero-Shot Autonomy (The Blind Analyst)

NMP ships with a Master System Prompt designed to instruct standard LLMs (such as Claude or GPT-4) on how to generate .wasm or JavaScript logic dynamically without hallucinating or breaking the Data Node Sandbox. You activate this structural instruction explicitly:
server.enableZeroShotAutonomy();
This automatically registers the nmp_blind_analyst intelligent prompt into the Mesh, dynamically injecting your Data Dictionary into it as well.

Security & Memory Management

When a Data Node receives a .wasm or .js payload, the Guardian Sentinel parses its Abstract Syntax Tree (AST) to validate it before execution. The result of this AST evaluation is cached in memory, so subsequent executions of the exact same logic payload skip re-validation (an O(1) lookup). If you are hot-deploying Zero-Day security patches on the host or need to forcefully invalidate the execution memory, you can purge the AST cache manually:
server.clearAstCache();

Bridging Legacy MCP Servers

NMP’s primary goal is rapid industry adoption. If you have spent months building a standard Model Context Protocol (MCP) server, you do not need to rewrite it to join the Neural Mesh. The @nekzus/neural-mesh package ships with the NmpMcpBridge. This adapter intercepts incoming NMP binary mesh requests, translates them into standard JSON-RPC 2.0 locally, forwards them to your unmodified MCP Server via stdio or SSE, and repackages the standard MCP response for the P2P network.
import { NmpMcpBridge } from "@nekzus/neural-mesh/bridge";
import { Server } from "@modelcontextprotocol/sdk/server/index.js";

// Your standard legacy MCP implementation
const mcpServer = new Server({ name: "LegacyWeatherApp", version: "1.0" });

// Wrap it in the NMP Bridge
const bridge = new NmpMcpBridge(mcpServer, {
  publishToMesh: true,
  meshIdentity: "WeatherNode_01"
});

// Start the bridge
await bridge.connect();
Your legacy application is now fully accessible to decentralized, Post-Quantum AI agents around the globe without changing a single line of your original MCP logic.

Starting the Server

Once your tools, watchdogs, and bridges are configured, you launch the Data Node by binding it to the network:
await server.connect();
console.log(`Neural Node online. Listening for incoming WASM payloads...`);