Mocklantis Desktop App Guide

Learn how to create mock servers, configure endpoints, and accelerate your development workflow

πŸ“¦ Installation

Get Mocklantis up and running on your machine

Step 1: Download the application

Visit the download page and select the version for your operating system (macOS, Windows, or Linux).

Step 2: Install and launch

Follow the installation instructions for your platform. Once installed, launch Mocklantis from your applications folder.

Step 3: First launch

On first launch, you'll see an empty workspace. You're ready to create your first mock server!

Getting Started

Mocklantis allows you to create mock servers in two ways: Default (blank server) or Import from OpenAPI (auto-generated from spec). Both methods are accessed through the same interface.

Step 1: Open the Create Server Modal

In the left sidebar, locate the Mock Servers section at the top. Click the + button next to "Mock Servers" to open the create server modal.

πŸ’‘ The + button is always visible at the top of the sidebar, even when you have existing servers.

Step 2: Choose Your Method

The modal has two tabs at the top. Select the method that fits your needs:

πŸ“‹ Default (Blank Server)

Create an empty server and manually add endpoints one by one. Perfect for:

  • Building a mock API from scratch
  • Small projects with few endpoints
  • Custom test scenarios
  • Learning and experimenting

Required Fields:

Server Name

Example: "My API", "Test Server", "Auth Service"

A friendly name to identify your server.

Port

Example: 3000, 8080, 4200

The port your mock server will listen on (1-65535). Must be unique.

⚠️ Port Validation: If the port is already in use by another server, you'll see a warning and cannot proceed.
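If you want to pick a free port before creating a server, you can check availability yourself. This is a minimal sketch of such a check, not Mocklantis's actual validation logic:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently listening on the port."""
    if not (1 <= port <= 65535):
        raise ValueError("port must be between 1 and 65535")
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when something accepts the connection,
        # i.e. the port is already taken by another server.
        return s.connect_ex((host, port)) != 0
```

Run it against your candidate port (3000, 8080, etc.) before filling in the Port field.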

πŸš€ Import from OpenAPI

Auto-generate a complete mock server from an OpenAPI/Swagger specification. Perfect for:

  • Mocking existing APIs with documentation
  • Large projects with many endpoints
  • Team collaboration with shared specs
  • Quick prototyping from design specs

Three Import Methods:

1. URL

https://petstore.swagger.io/v2/swagger.json

Paste a public URL to an OpenAPI spec (JSON or YAML).

2. File

Upload a local OpenAPI file (.json, .yaml, .yml) from your computer.

3. Paste

Click "Open Editor" and paste your OpenAPI spec directly (JSON or YAML format).

Optional Fields:

Server Name (Optional)

Custom name for your server. If not provided, the name is extracted from the OpenAPI spec.

Port (Optional)

Custom port. If not provided, an available port is automatically assigned.

✨ Auto-Generated: All endpoints, paths, methods, parameters, request/response schemas, and example responses are automatically created from the spec!

Step 3: Click "New Mock Server"

After filling in the required fields, click the "New Mock Server" button at the bottom of the modal.

Default Method: The server is created instantly and appears in the sidebar. It's not started automatically - you must add at least one endpoint first, then the "Start Server" button becomes active.

OpenAPI Import: Processing may take a few seconds depending on spec size. The server is created with all endpoints and automatically starts - ready to use immediately!

Step 4: Add Endpoints (Default servers only)

If you created a Default server, it starts empty. You need to add endpoints manually.

How to Add Endpoints:

  1. Click your server in the sidebar to select it
  2. In the main area, click the "+ New Endpoint" button (or the dropdown arrow next to it)
  3. Choose endpoint type:
    • HTTP - REST API endpoints (GET, POST, PUT, DELETE, etc.)
    • WebSocket - Real-time bidirectional WebSocket endpoints
    • SSE - Server-Sent Events for real-time streaming
    • Webhook - Callback endpoints that trigger HTTP requests to external URLs
  4. Configure your endpoint (path, method, response, etc.)
  5. Changes auto-save - no need to click save!

⚠️ Important: Servers cannot be started without at least one endpoint. Add an endpoint before clicking "Start Server".

Step 5: Start Your Server

Once your server has at least one endpoint, click the "Start Server" button in the top section.

βœ… Server Running

  • Status indicator turns green and pulses
  • Button changes to "Stop Server"
  • Your mock API is now live and ready to receive requests!

πŸ’‘ Hot Reload: You can edit endpoints while the server is running. Changes take effect immediately without restarting!

Step 6: Test Your Mock API

Your mock server is now running! Test it with any HTTP client:

Example with cURL:

curl http://localhost:3000/api/users

Other Tools:

  • Postman / Insomnia - REST client applications
  • Your App - Point your frontend/mobile app to localhost
  • Browser - Visit URLs directly for GET requests
  • HTTPie, wget, fetch - Command-line tools

πŸ“Š Real-time Logging

Open the Logs panel (bottom of the screen) to see all incoming requests in real-time. You can inspect request headers, body, query params, and responses in a Postman-style accordion view!

Proxy

What is Proxy?

The Proxy feature allows you to forward unmatched requests to a real backend server. This is incredibly useful when you want to:

  • Mock only specific endpoints while keeping others live
  • Test your app with a mix of mock and real data
  • Gradually migrate from a real API to mocks (or vice versa)
  • Debug specific endpoints without affecting the entire API

How Does It Work?

πŸ”΄ Proxy OFF (Default)

When a request comes to an undefined endpoint:

Request: GET /users/123
Response: {"error": "No mock matched"}

🟒 Proxy ON

When a request comes to an undefined endpoint:

Request: GET /users/123
β†’ Forwarded to: https://api.example.com/users/123
Response: (whatever the target API returns)

ℹ️ If the target API also doesn't have this endpoint, it will return its own 404 error.
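The decision above (mock first, proxy only for unmatched requests) can be sketched as a small dispatcher. This is an illustrative model of the behavior, not Mocklantis's internal code:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    method: str
    path: str
    response: dict

def dispatch(method, path, endpoints, proxy_enabled=False, proxy_target=None):
    """Return (source, payload): a mock response, a proxy target, or an error."""
    for ep in endpoints:
        if ep.method == method and ep.path == path:
            return ("mock", ep.response)        # defined endpoints always win
    if proxy_enabled and proxy_target:
        # In the real app the request is forwarded with headers and body;
        # here we just report where it would go.
        return ("proxy", f"{proxy_target}{path}")
    return ("error", {"error": "No mock matched"})
```

With proxy off, `GET /users/123` yields the "No mock matched" error; with proxy on, the same request is forwarded to the target URL.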

How to Use Proxy

Step 1: Open Proxy Settings

Select your server, then click the Settings button in the top-right corner of the server panel.

In the dropdown menu, you'll see "Proxy Settings" section at the top. Click Proxy to open the settings modal.

πŸ’‘ If proxy is already enabled, you'll see a green ON badge next to the Proxy menu item.

Step 2: Enable Proxy

In the Proxy Settings modal, toggle the Enable Proxy switch to ON.

Step 3: Enter Target URL

Enter the URL of your real backend server. For example:

β€’ https://api.production.com
β€’ http://localhost:8080
β€’ https://staging.myapp.io

Step 4: Save

Click Save. Changes take effect immediately - no server restart required!

Proxy Status in Sidebar

When proxy is enabled, you can easily see its status in the sidebar under your server name:

πŸ“My API Server:3000
Proxy ON

This gives you a quick overview of which servers have proxy enabled without opening settings.

⚑ Dynamic Routing - No Restart Needed!

Mocklantis uses dynamic routing for proxy settings. This means:

  • Instant activation: Proxy settings apply immediately after saving
  • No downtime: Your mock server keeps running while settings change
  • Hot-swap: Switch between proxy targets without interrupting active connections
  • Toggle on-the-fly: Enable/disable proxy anytime during testing

πŸ“š Real-World Example

Scenario: Testing a Login Flow

You're building a mobile app and want to test the login flow. You want to mock the /auth/login endpoint to always return success, but keep all other endpoints (like /user/profile, /products) connected to the real backend.

1. Setup Mock Server
Server Port: 3000
Mock Endpoint: POST /auth/login
Response: {"success": true, "token": "mock-jwt-token"}
2. Enable Proxy
Proxy: ON
Target URL: https://api.myapp.com
3. Start Server & Test
Request: POST /auth/login
β†’ Returns mock response (always success)
Request: GET /user/profile
β†’ Forwarded to https://api.myapp.com/user/profile
β†’ Returns real user data
Request: GET /products
β†’ Forwarded to https://api.myapp.com/products
β†’ Returns real products

πŸ“Š Request Flow

Incoming Request
      β”‚
      β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Check Method   β”‚
β”‚  (GET, POST...) β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚
         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     No Match      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Check Path     β”‚ ─────────────────▢│  Proxy Enabled? β”‚
β”‚  Variables      β”‚                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜                            β”‚
         β”‚                           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚                           β”‚                   β”‚
         β–Ό                          Yes                  No
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                  β”‚                   β”‚
β”‚  Check Query    β”‚                  β–Ό                   β–Ό
β”‚  Parameters     β”‚         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜         β”‚ Forward to    β”‚   β”‚ Return 404    β”‚
         β”‚                  β”‚ Target URL    β”‚   β”‚ "No mock      β”‚
         β–Ό                  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚  matched"     β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”                             β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
β”‚  Match Found!   β”‚
β”‚  Return Mock    β”‚
β”‚  Response       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

⚠️ Important Notes

  • Defined endpoints always take priority: Mock responses are returned first; proxy is used only for unmatched requests.
  • Target URL must be accessible: Make sure the proxy target URL is reachable from your machine.
  • Headers are forwarded: Request headers (except Host) are automatically forwarded to the target server.
  • Response is passed through: Whatever the target server returns (including errors) is sent back to the client as-is.
  • Proxy requests are logged: All proxied requests appear in the Logs panel with the target URL info.
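The header-forwarding rule (everything except Host is passed along) can be illustrated with a few lines. A sketch of the idea, assuming headers are plain key/value pairs:

```python
def forward_headers(incoming: dict, target_host: str) -> dict:
    """Copy request headers for the proxied request, replacing Host.

    The Host header must name the proxy target, not the mock server,
    or the upstream may route the request incorrectly.
    """
    out = {k: v for k, v in incoming.items() if k.lower() != "host"}
    out["Host"] = target_host
    return out
```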

Recording

What is Recording?

Recording allows you to automatically create mock endpoints by capturing real API responses. Think of it as a "learning mode" - Mocklantis learns from a real API and creates mocks for you.

Use case: You have an existing API and want to quickly create mocks for it. Instead of manually creating each endpoint, just enable Recording, make requests, and let Mocklantis do the work.

How Recording Works

Step 1: Enable Recording with a Target URL

Enter the base URL of the real API you want to record from (e.g., https://api.example.com)

Step 2: Send Requests to Your Mock Server

Make HTTP requests to your mock server as you normally would

Step 3: Mocklantis Forwards Unmatched Requests

If no existing endpoint matches, the request is forwarded to the target API

Step 4: Response is Captured & Endpoint Created

The real API response is captured and a new mock endpoint is automatically created

Step 5: Response Returned to Client

The response is also returned to your application - no interruption to your workflow

Your App                  Mocklantis                 Real API
   |                          |                           |
   |-- GET /users ----------->|                           |
   |                          |-- (no match, forward) --->|
   |                          |<---- { users: [...] } ----|
   |                          |                           |
   |                    [Create endpoint]                 |
   |                    GET /users -> 200                 |
   |                    body: { users: [...] }            |
   |                          |                           |
   |<---- { users: [...] } ---|                           |
   |                          |                           |
   |-- GET /users ----------->|                           |
   |<---- { users: [...] } ---| (now uses mock!)          |
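The flow in the diagram can be modeled as a small function. This sketch also folds in two limits documented later in this section (the response-size cap and duplicate detection); `fetch_real` is a hypothetical stand-in for the HTTP call to the target API:

```python
def handle_while_recording(method, path, endpoints, fetch_real, max_body=50_000):
    """Serve an existing mock if one matches; otherwise forward the
    request, capture the real response, and register a new endpoint.

    endpoints: dict keyed by (method, path) -> (status, body)
    fetch_real: callable (method, path) -> (status, body)
    """
    key = (method, path)
    if key in endpoints:                     # existing mock always wins
        return endpoints[key]
    status, body = fetch_real(method, path)  # forward to the real API
    if len(body) <= max_body:                # oversized responses are skipped
        endpoints[key] = (status, body)      # keyed by method+path, so no duplicates
    return (status, body)                    # client gets the response either way
```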

How to Start Recording

  1. Start your server - Recording requires a running server
  2. Open Settings dropdown - Click the gear icon (βš™οΈ) in the server header
  3. Click "Record" - Opens the Recording modal
  4. Enter Target API URL - The base URL of the API you want to record (e.g., https://api.example.com)
  5. (Optional) Configure Authentication - If the target API requires auth
  6. Click "Start Recording" - You'll see a red recording indicator

Note: The server must be running before you can start recording. If the "Record" option is disabled, start your server first.

What Gets Recorded

When a request is recorded, Mocklantis captures the full request and response context:

Request Context

  • HTTP Method (GET, POST, PUT, DELETE, PATCH)
  • Path (including query string)
  • Request Headers
  • Query Parameters
  • Request Body (for POST/PUT/PATCH)

Response Context

  • Status Code (200, 201, 404, etc.)
  • Response Body
  • Response Headers
  • Content-Type (JSON, XML, String)

Smart Matching: Recorded endpoints include request matching. If you recorded POST /users with a specific body, the mock will only match requests with the same body.

Authentication

If the target API requires authentication, you can configure it in the Recording modal. Mocklantis supports:

Basic Auth

Username and password authentication

Authorization: Basic base64(username:password)

Bearer Token

JWT or OAuth2 token authentication

Authorization: Bearer your-token-here

API Key

API key in header or query parameter

Header: X-API-Key: your-api-key
Query: ?api_key=your-api-key
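The three header shapes above are easy to build programmatically. A small sketch of each (the header and parameter names mirror the examples above):

```python
import base64

def basic_auth(username: str, password: str) -> dict:
    """Authorization: Basic base64(username:password)"""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def bearer_auth(token: str) -> dict:
    """Authorization: Bearer <token> (JWT / OAuth2)"""
    return {"Authorization": f"Bearer {token}"}

def api_key_header(key: str, header_name: str = "X-API-Key") -> dict:
    """API key sent as a request header."""
    return {header_name: key}
```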

Recording vs Proxy

Both Recording and Proxy forward requests to a target API, but they serve different purposes:

Feature           | Recording                  | Proxy
Purpose           | Create mocks from real API | Forward unmatched requests
Creates Endpoints | Yes                        | No
Persists Data     | Yes (saves endpoints)      | No (just forwards)
Use Case          | Initial mock setup         | Hybrid mock/real mode
Priority          | Takes priority over Proxy  | Only if Recording is off

Workflow: Use Recording to quickly create your initial mocks, then switch to Proxy mode for ongoing development where you want some endpoints mocked and others to hit the real API.

Limitations

  • ⚠️ Response Size Limit: Responses larger than 50KB are skipped. Large responses (images, files, large JSON) won't be recorded.
  • ⚠️ Duplicate Detection: If an endpoint with the same method and path already exists, it won't be recorded again. This prevents duplicates.
  • ⚠️ HTTP Only: Recording works for HTTP/REST endpoints. WebSocket and SSE connections are not recorded.

Tips for Best Results

πŸ’‘ Start with an empty server: Create a new server specifically for recording to keep things organized.

πŸ’‘ Use your test suite: Run your integration tests against the mock server while recording. This captures all the endpoints your app uses.

πŸ’‘ Review and edit: After recording, review the created endpoints. You may want to simplify request matching or add response variations.

πŸ’‘ Watch the counter: The Recording modal shows how many endpoints have been created and any skipped requests.

Endpoints

Mocklantis supports four endpoint types (HTTP, WebSocket, SSE, and Webhook), each designed for different use cases. This section covers the two you'll use most often: HTTP/REST and WebSocket. Create as many endpoints as you need - there are no limits!

🌐 HTTP/REST Endpoints

Traditional HTTP endpoints supporting all standard methods. Perfect for mocking REST APIs, webhooks, and any HTTP-based service.

Supported HTTP Methods

GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS

Rich Features

  • βœ“ Path Variables: Typed parameters like /users/{id:number}
  • βœ“ Query Parameters: Type validation and exact value matching
  • βœ“ Request Matching: Match by headers, body, and query params
  • βœ“ Dynamic Responses: Use {{random.*}} variables for realistic data
  • βœ“ Custom Headers: Full control over request and response headers
  • βœ“ Response Delay: Simulate network latency in milliseconds

πŸ’‘ REST Conventions? Optional!

While RESTful design is recommended for clean APIs, Mocklantis doesn't enforce it. You're free to structure your endpoints however you need:

  • βœ… RESTful: GET /users/123
  • βœ… RPC-style: POST /getUserById
  • βœ… Custom: GET /api/v1/fetch-user-data?id=123
  • βœ… Whatever works for your use case!

We're not strict - use the patterns that make sense for your project. Both REST purists and pragmatic developers are welcome here! 🀝

⚑ WebSocket Endpoints

Real-time bidirectional communication channels. Perfect for chat applications, live updates, notifications, and any real-time feature.

Three Powerful Modes

πŸ’¬ CONVERSATIONAL

Pattern-based request/response. The client sends a message; the server responds based on matching patterns.

πŸ“‘ STREAMING

Continuous data flow. The server sends messages at regular intervals automatically.

🎯 TRIGGERED_STREAMING

Start streaming on demand. The client sends a trigger message; the server starts sending the stream.

Advanced Features

  • βœ“ Message Patterns: Match incoming messages by exact, contains, regex, or JSON path
  • βœ“ Lifecycle Events: Custom messages on connect/disconnect
  • βœ“ Configurable Intervals: Control streaming frequency and timing
  • βœ“ Client Limits: Control maximum connected clients

Quick Comparison

Feature         | HTTP/REST                            | WebSocket
Connection Type | Request-Response                     | Bidirectional, Persistent
Best For        | CRUD operations, APIs                | Real-time updates, chat, live data
Server Push     | No                                   | Yes
Overhead        | Higher (new connection each request) | Lower (single persistent connection)

πŸ“š Want to Learn More?

The sections above provide a quick overview of both endpoint types. For detailed guides with examples and best practices, continue reading below. We'll dive deep into HTTP/REST features, request matching, status codes, and much more!

Random Variables

Random variables allow you to generate dynamic, random data in your request and response bodies at runtime. Use the format {{random.type}} in your response body.

Example Request Body & Response Body:

{
  "id": "{{random.uuid}}",
  "email": "{{random.email}}",
  "name": "{{random.name}}",
  "age": {{random.number}},
  "ipAddress": "{{random.ip}}",
  "createdAt": "{{random.date}}"
}

Actual Request & Response (Example):

{
  "id": "a3d5f7b9-1234-5678-9abc-def012345678",
  "email": "[email protected]",
  "name": "Alice Johnson",
  "age": 742,
  "ipAddress": "192.168.1.147",
  "createdAt": "2024-08-15"
}

⚠️ Important: Quotes for String Types

String types (uuid, email, name, ip, url, website, date, string, alphanumeric, custom) must be wrapped in quotes in JSON. Number types (number, double) and boolean should NOT have quotes.

βœ… Correct: "ip": "{{random.ip}}"
βœ… Correct: "age": {{random.number}}
❌ Wrong: "ip": {{random.ip}} (missing quotes)

Random Variable Types

Type         | Usage                                                  | Example Output          | Default Range/Limit
string       | {{random.string}} or {{random.string(30)}}             | aBcDeFgHiJ              | 10 chars (max 50)
uuid         | {{random.uuid}}                                        | 550e8400-e29b-41d4...   | UUID v4 format
number       | {{random.number}} or {{random.number(1,100)}}          | 7482                    | 0-999 (max 1,000,000)
double       | {{random.double}} or {{random.double(0.0,10.0)}}       | 342.78                  | 0.0-100.0 (max 1,000,000, 2 decimals)
email        | {{random.email}}                                       | [email protected]        | RFC 2606 domains
name         | {{random.name}}                                        | John Smith              | First + Last name
boolean      | {{random.boolean}}                                     | true                    | true or false
date         | {{random.date}}                                        | 2024-11-03              | ISO format (last year)
phone        | {{random.phone}}                                       | +905123456789           | TR format
url          | {{random.url}}                                         | https://example.com/api | HTTPS URLs
ip           | {{random.ip}}                                          | 192.168.1.147           | Private IP range
website      | {{random.website}}                                     | example.com             | Domain names only
alphanumeric | {{random.alphanumeric}} or {{random.alphanumeric(20)}} | aB3dE9fG2h              | 10 chars (max 50)
custom       | {{random.custom([A-Z]{3}\d{4})}}                       | ABC1234                 | Regex pattern (max 100)

Advanced Usage

Parameterized Random Types

String with custom length:

{{random.string(30)}} β†’ 30-character string

Number with range:

{{random.number(1,100)}} β†’ number between 1 and 100

Double with range:

{{random.double(0.0,1.0)}} β†’ decimal between 0.0 and 1.0
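To make the substitution concrete, here is a minimal sketch of how such tokens can be resolved at runtime. It covers only a few representative types and is not Mocklantis's actual engine:

```python
import random
import re
import string
import uuid

def resolve_random(template: str) -> str:
    """Replace {{random.*}} tokens in a response template."""
    def gen(match):
        expr = match.group(1)
        if expr == "uuid":
            return str(uuid.uuid4())
        m = re.fullmatch(r"number\((\d+),(\d+)\)", expr)
        if m:                                     # parameterized range
            return str(random.randint(int(m.group(1)), int(m.group(2))))
        if expr == "number":
            return str(random.randint(0, 999))    # documented default range
        m = re.fullmatch(r"string\((\d+)\)", expr)
        if m:                                     # custom length, capped at 50
            n = min(int(m.group(1)), 50)
            return "".join(random.choices(string.ascii_letters, k=n))
        if expr == "string":
            return "".join(random.choices(string.ascii_letters, k=10))
        return match.group(0)                     # unknown type: leave as-is
    return re.sub(r"\{\{random\.([^}]+)\}\}", gen, template)
```

Because number tokens expand without quotes and string tokens sit inside quotes, the resolved template parses as valid JSON, as the quoting rules above require.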

Custom Regex Patterns

✨ Supports Nested Parentheses!

Complex regex patterns with nested brackets like [A-Z]{3}\d{4} are fully supported.

Example Patterns:

{{random.custom([A-Z]{3}\d{4})}} β†’ ABC1234 (3 uppercase letters + 4 digits)
{{random.custom([a-f0-9]{32})}} β†’ MD5-like hash
{{random.custom(\d{3}-\d{2}-\d{4})}} β†’ 123-45-6789 (SSN format)

Full Example

A comprehensive example covering all random variable types with both parameterized and default variants.

{
  "user": {
    "id": "{{random.uuid}}",
    "username": "{{random.alphanumeric(12)}}",
    "usernameShort": "{{random.alphanumeric}}",
    "email": "{{random.email}}",
    "fullName": "{{random.name}}",
    "phone": "{{random.phone}}",
    "website": "{{random.website}}",
    "profileUrl": "{{random.url}}",
    "ipAddress": "{{random.ip}}",
    "bio": "{{random.string(50)}}",
    "bioShort": "{{random.string}}",
    "age": {{random.number(18,65)}},
    "loginCount": {{random.number}},
    "rating": {{random.double(1.0,5.0)}},
    "balance": {{random.double}},
    "isActive": {{random.boolean}},
    "createdAt": "{{random.date}}",
    "referralCode": "{{random.custom([A-Z]{3}-[0-9]{4})}}"
  },
  "order": {
    "orderId": "{{random.uuid}}",
    "trackingNumber": "{{random.custom([A-Z]{2}[0-9]{9}[A-Z]{2})}}",
    "itemCount": {{random.number(1,50)}},
    "totalItems": {{random.number}},
    "subtotal": {{random.double(10.0,1000.0)}},
    "tax": {{random.double}},
    "isPaid": {{random.boolean}},
    "orderDate": "{{random.date}}",
    "customerEmail": "{{random.email}}",
    "customerName": "{{random.name}}",
    "notes": "{{random.string(50)}}",
    "shortNote": "{{random.string}}",
    "sku": "{{random.alphanumeric(8)}}",
    "batchCode": "{{random.alphanumeric}}"
  },
  "analytics": {
    "sessionId": "{{random.uuid}}",
    "visitorIp": "{{random.ip}}",
    "referrerUrl": "{{random.url}}",
    "landingPage": "{{random.website}}",
    "pageViews": {{random.number(1,500)}},
    "totalClicks": {{random.number}},
    "bounceRate": {{random.double(0.0,100.0)}},
    "avgScore": {{random.double}},
    "isNewVisitor": {{random.boolean}},
    "userAgent": "{{random.string(50)}}",
    "deviceId": "{{random.alphanumeric(16)}}",
    "fingerprint": "{{random.alphanumeric}}",
    "campaignCode": "{{random.custom(CMP-[0-9]{6})}}"
  },
  "metadata": {
    "requestId": "{{random.uuid}}",
    "timestamp": "{{random.date}}",
    "serverIp": "{{random.ip}}",
    "gatewayUrl": "{{random.url}}",
    "apiVersion": "{{random.custom(v[0-9].[0-9].[0-9])}}",
    "responseTime": {{random.number(10,2000)}},
    "retryCount": {{random.number}},
    "confidenceScore": {{random.double(0.0,1.0)}},
    "weight": {{random.double}},
    "cached": {{random.boolean}},
    "region": "{{random.alphanumeric(4)}}",
    "zone": "{{random.alphanumeric}}",
    "traceInfo": "{{random.string(30)}}",
    "debugLog": "{{random.string}}",
    "contactEmail": "{{random.email}}",
    "contactName": "{{random.name}}",
    "contactPhone": "{{random.phone}}",
    "docsUrl": "{{random.website}}"
  }
}

Types Covered

uuid
email
name
phone
ip
url
website
date
boolean
number + range
double + range
string + length
alphanumeric + length
custom (regex)

Real-World Use Cases

User Registration: Generate Realistic User Data

When testing user registration flows, you need unique user data for each request. Random variables make this effortless.

POST /api/register - Response Template
RESPONSE BODY (Template)
{
  "success": true,
  "user": {
    "id": "{{random.uuid}}",
    "username": "{{random.alphanumeric(12)}}",
    "email": "{{random.email}}",
    "fullName": "{{random.name}}",
    "phoneNumber": "{{random.phone}}",
    "createdAt": "{{random.date}}",
    "verified": {{random.boolean}}
  },
  "token": "{{random.custom([A-Za-z0-9]{64})}}"
}
ACTUAL RESPONSE (Example)
{
  "success": true,
  "user": {
    "id": "a3d5f7b9-1234-5678-9abc-def012345678",
    "username": "xK8mP2vQ7nT4",
    "email": "[email protected]",
    "fullName": "Sarah Williams",
    "phoneNumber": "+905123456789",
    "createdAt": "2024-08-15",
    "verified": true
  },
  "token": "Xy9Kp2Nm4Qr7Tv3Wz8Ys5Lt6Hu9Jn2Op1Kq4Rw7Sx8Yv3Zu6Kp9Ln2Mt5Qr8Tw1Xv4Yz"
}
Every request generates completely different user data - perfect for testing signup flows!

E-commerce: Dynamic Product Catalog

Generate realistic product catalogs with varying prices, stock levels, and ratings without hardcoding values.

GET /api/products - Response Template
RESPONSE BODY (Template)
{
  "products": [
    {
      "id": "{{random.uuid}}",
      "name": "Wireless Mouse",
      "sku": "{{random.custom([A-Z]{3}-\d{6})}}",
      "price": {{random.double(9.99,199.99)}},
      "stock": {{random.number(0,500)}},
      "rating": {{random.double(1.0,5.0)}},
      "inStock": {{random.boolean}},
      "url": "{{random.url}}"
    },
    {
      "id": "{{random.uuid}}",
      "name": "Mechanical Keyboard",
      "sku": "{{random.custom([A-Z]{3}-\d{6})}}",
      "price": {{random.double(49.99,299.99)}},
      "stock": {{random.number(0,200)}},
      "rating": {{random.double(1.0,5.0)}},
      "inStock": {{random.boolean}},
      "url": "{{random.url}}"
    }
  ],
  "total": {{random.number(50,500)}}
}
ACTUAL RESPONSE (Example)
{
  "products": [
    {
      "id": "b7c3d8e9-5678-1234-bcde-abc123456789",
      "name": "Wireless Mouse",
      "sku": "ABC-582941",
      "price": 34.79,
      "stock": 247,
      "rating": 4.37,
      "inStock": true,
      "url": "https://example.com/api/products/mouse"
    },
    {
      "id": "f2a8b4c6-9012-3456-adef-987654321abc",
      "name": "Mechanical Keyboard",
      "sku": "XYZ-193847",
      "price": 129.99,
      "stock": 87,
      "rating": 3.84,
      "inStock": false,
      "url": "https://example.org/api/products/keyboard"
    }
  ],
  "total": 342
}
Test different price ranges, stock levels, and ratings with every request!

Analytics Dashboard: Metrics & Statistics

Simulate analytics dashboards with dynamic metrics, visitor counts, and performance data.

GET /api/dashboard/stats - Response Template
RESPONSE BODY (Template)
{
  "stats": {
    "totalUsers": {{random.number(10000,100000)}},
    "activeUsers": {{random.number(1000,10000)}},
    "revenue": {{random.double(50000.0,500000.0)}},
    "conversionRate": {{random.double(1.0,10.0)}},
    "avgSessionTime": {{random.number(120,600)}},
    "bounceRate": {{random.double(20.0,80.0)}}
  },
  "traffic": {
    "visitors": {{random.number(500,5000)}},
    "pageViews": {{random.number(1000,20000)}},
    "uniqueVisitors": {{random.number(300,3000)}}
  },
  "topPages": [
    {
      "url": "{{random.url}}",
      "views": {{random.number(100,5000)}}
    },
    {
      "url": "{{random.url}}",
      "views": {{random.number(100,5000)}}
    }
  ],
  "reportDate": "{{random.date}}"
}
Perfect for testing dashboard UI with varying metrics - see how your charts handle different data ranges!

Tips & Best Practices

βœ… Random variables are replaced at runtime on each request

βœ… Each request gets unique random values - perfect for testing

βœ… Use parameterized types for controlled randomness

⚠️ All generated values respect maximum limits to prevent overflow

🎯 Combine random variables with static data for realistic mock responses

🎯 Email domains use RFC 2606 reserved domains (example.com, example.org, example.net)

⚠️ Remember: String types (ip, email, uuid, etc.) need quotes in JSON. Number/boolean types don't!

πŸ’‘ Use custom regex patterns for industry-specific formats like order IDs, tracking numbers, or license keys

Response Templating

Response templating allows you to reference incoming request data in your response body. Use the format {{request.category.key}} to include request values dynamically.

Available Variables:

Category  | Syntax                                         | Description                        | Key Required
path      | {{request.path.id}}                            | Path parameter value               | Yes
query     | {{request.query.name}}                         | Query parameter value              | Yes
header    | {{request.header.X-Request-Id}}                | Request header (case-insensitive)  | Yes
body      | {{request.body}} or {{request.body.user.name}} | Full body or field path (JSON/XML) | Optional
method    | {{request.method}}                             | HTTP method (GET, POST, etc.)      | No
url       | {{request.url}}                                | Full request URL                   | No
timestamp | {{request.timestamp}}                          | ISO timestamp of request           | No

Path Parameters

GET /users/{id:number}

RESPONSE TEMPLATE

{
  "userId": "{{request.path.id}}",
  "message": "User {{request.path.id}} found",
  "requestedAt": "{{request.timestamp}}"
}

REQUEST: GET /users/42

ACTUAL RESPONSE

{
  "userId": "42",
  "message": "User 42 found",
  "requestedAt": "2024-01-15T10:30:00.000Z"
}

JSON Body & JSONPath

POST /api/users

REQUEST BODY (JSON)

{
  "user": {
    "name": "John Doe",
    "email": "[email protected]"
  },
  "role": "admin"
}

RESPONSE TEMPLATE

{
  "status": "created",
  "userName": "{{request.body.user.name}}",
  "userEmail": "{{request.body.user.email}}",
  "assignedRole": "{{request.body.role}}"
}

ACTUAL RESPONSE

{
  "status": "created",
  "userName": "John Doe",
  "userEmail": "[email protected]",
  "assignedRole": "admin"
}

JSONPath Syntax

user.name - Nested object access
items[0] - Array index access
items[0].product - Nested array access
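The dot-and-index navigation above can be sketched with a short resolver. This models the documented behavior (including the empty string for missing fields); it is not Mocklantis's implementation:

```python
import json
import re

def extract(body: str, path: str) -> str:
    """Resolve a dot-path like 'user.name' or 'items[0].product'
    against a JSON body; missing fields yield ''."""
    value = json.loads(body)
    # Tokenize into plain keys and [index] segments.
    for key, idx in re.findall(r"([^.\[\]]+)|\[(\d+)\]", path):
        try:
            value = value[int(idx)] if idx else value[key]
        except (KeyError, IndexError, TypeError):
            return ""                       # missing value -> empty string
    return value if isinstance(value, str) else json.dumps(value)
```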

XML Body & XPath

When the request body is XML (Content-Type: application/xml), you can extract fields using XPath-like dot notation.

POST /api/orders (XML)

REQUEST BODY (XML)

<order>
  <customer>
    <name>John Doe</name>
    <email>[email protected]</email>
  </customer>
  <total>150.00</total>
</order>

RESPONSE TEMPLATE

{
  "orderId": "{{random.uuid}}",
  "customerName": "{{request.body.order.customer.name}}",
  "customerEmail": "{{request.body.order.customer.email}}",
  "orderTotal": "{{request.body.order.total}}"
}

ACTUAL RESPONSE

{
  "orderId": "a1b2c3d4-5678-90ab-cdef-ghij12345678",
  "customerName": "John Doe",
  "customerEmail": "[email protected]",
  "orderTotal": "150.00"
}

XPath Syntax

order.customer.name - Nested element access
users.user.id - Deep nested access
/order/total - Full XPath also supported
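The XML variant works the same way over element trees. A sketch using the standard library, assuming the first path segment names the root element and missing paths yield an empty string as described above:

```python
import xml.etree.ElementTree as ET

def extract_xml(body: str, path: str) -> str:
    """Resolve dot notation like 'order.customer.name' (or '/order/total')
    against an XML request body."""
    root = ET.fromstring(body)
    parts = path.lstrip("/").replace("/", ".").split(".")
    if parts and parts[0] == root.tag:      # first segment is the root element
        parts = parts[1:]
    node = root.find("/".join(parts)) if parts else root
    return (node.text or "") if node is not None else ""
```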

Content-Type Detection

Field extraction uses the Content-Type header to determine the parser:

  • application/json β†’ JSONPath
  • application/xml or text/xml β†’ XPath
  • text/plain β†’ No field extraction (use {{request.body}} for full body)

Pro Tips

πŸ’‘ Combine with random variables: {"id": "{{random.uuid}}", "user": "{{request.body.name}}"}

πŸ’‘ Use {{request.body}} without a path to echo the entire request body

πŸ’‘ Headers are case-insensitive: {{request.header.content-type}} works the same as {{request.header.Content-Type}}

πŸ’‘ Missing values return an empty string - your response won't break if a field doesn't exist

HTTP/REST Endpoints

πŸ“Š HTTP Status Codes

Every HTTP endpoint can return any valid HTTP status code. Mocklantis supports all standard status codes from 200 to 599. Simply click on the status selector in the Response section to choose from popular codes like 200 OK, 201 Created, 400 Bad Request, 404 Not Found, 500 Internal Server Error, and more.

The status selector is searchable - you can type the code number or the status text to quickly find what you need. For example, search "unauthorized" to find 401, or search "403" to find Forbidden.

Tip: Use appropriate status codes to make your mocks realistic. Return 200 for successful operations, 201 for resource creation, 204 for deletions, 400 for validation errors, 404 for missing resources, and 500 for server errors.

πŸ“„ Request Body Matching

By default, Mocklantis returns your configured response regardless of what's in the request body. But sometimes you want different responses based on the request payload. That's where Request Body Matching comes in.

In the Request β†’ Body tab, you'll find a "Match Body" checkbox. When enabled, Mocklantis will only return your mock response if the incoming request body matches what you specified (as JSON). This is perfect for testing different scenarios - for example, returning success for valid login credentials and error for invalid ones.

Example: Login endpoint with body matching
βœ“ Endpoint 1: POST /auth/login
Match Body: ON
Expected Body: {"email": "[email protected]", "password": "correct"}
Response: {"success": true, "token": "..."} (200)
βœ“ Endpoint 2: POST /auth/login
Match Body: OFF (catches everything else)
Response: {"success": false, "error": "Invalid credentials"} (401)

If no endpoint matches the request body, Mocklantis returns the first endpoint with "Match Body" disabled, or a 404 if all have matching enabled.
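The routing rule above can be sketched as a small Python function. The data shapes and the example email are illustrative assumptions, not Mocklantis internals: exact body matches win, then the first endpoint with matching disabled, then a 404.

```python
def route(endpoints, request_body):
    """endpoints: dicts with 'match_body' (bool), 'expected' (dict|None), 'response'."""
    # Pass 1: exact body matches take priority.
    for ep in endpoints:
        if ep["match_body"] and ep["expected"] == request_body:
            return ep["response"]
    # Pass 2: fall back to the first endpoint with matching disabled.
    for ep in endpoints:
        if not ep["match_body"]:
            return ep["response"]
    # No match at all: 404.
    return {"status": 404}

endpoints = [
    {"match_body": True,
     "expected": {"email": "user@example.com", "password": "correct"},
     "response": {"status": 200, "body": {"success": True}}},
    {"match_body": False, "expected": None,
     "response": {"status": 401, "body": {"success": False}}},
]
print(route(endpoints, {"email": "user@example.com", "password": "correct"})["status"])  # 200
print(route(endpoints, {"email": "user@example.com", "password": "wrong"})["status"])    # 401
```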

🏷️ Request Header Matching

Similar to body matching, you can also match requests based on their headers. In the Request β†’ Headers tab, enable the "Match Headers" checkbox to require specific headers in the incoming request.

This is useful for testing authentication flows (matching Authorization headers), API versioning (matching Accept headers), or any scenario where different headers should trigger different responses.

Example: Protected endpoint
βœ“ Endpoint 1: GET /api/data
Match Headers: ON
Expected: Authorization: Bearer valid-token
Response: {"data": [...]} (200)
βœ“ Endpoint 2: GET /api/data
Match Headers: OFF
Response: {"error": "Unauthorized"} (401)

πŸ” Query Parameter Matching

Query parameters can be validated in two ways: type validation and exact value matching. In the Request β†’ Query Params tab, you can define expected query parameters and choose whether to match their type or exact value.

When "Match Query Params" is enabled, Mocklantis uses a scoring algorithm to find the best matching endpoint. Exact value matches score higher than type matches. If multiple endpoints match, the one with the highest score wins. This allows you to create generic fallback endpoints and specific ones for particular query parameter values.

Example: Search with filters
βœ“ Endpoint 1: GET /api/products
Match Query Params: ON
category = "electronics" (exact match)
Response: {"products": ["laptop", "phone", ...]}
βœ“ Endpoint 2: GET /api/products
Match Query Params: ON
category: string (type match - catches all other categories)
Response: {"products": [...]}

More details about query parameter types and validation can be found in the Query Parameters section below.
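The scoring idea above can be sketched in Python. The weights (2 for an exact value match, 1 for a type match) and the rule shapes are illustrative assumptions, not Mocklantis's actual numbers, but they show why the specific endpoint outranks the generic fallback when both match.

```python
def score(rules, params):
    """rules: {name: ("exact", value) or ("type", "string"|"number")}."""
    total = 0
    for name, (kind, expected) in rules.items():
        value = params.get(name)
        if value is None:
            return -1  # required param missing: no match
        if kind == "exact":
            if value != expected:
                return -1
            total += 2  # exact value matches score higher
        elif kind == "type":
            if expected == "number" and not value.isdigit():
                return -1
            total += 1
    return total

specific = {"category": ("exact", "electronics")}
generic = {"category": ("type", "string")}
params = {"category": "electronics"}
# Both endpoints match, but the exact rule scores higher, so the specific one wins.
print(score(specific, params), score(generic, params))  # 2 1
```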

🎨 Automatic Content-Type Detection

Mocklantis automatically detects and sets the Content-Type header based on your request and response body content. When you type JSON in the body editor, it automatically adds Content-Type: application/json to the headers. The same works for XML content.

The Monaco editor (used for body editing) also provides syntax highlighting, validation, and formatting based on the detected content type. This makes it easy to work with JSON and XML responses without manually configuring everything.

βš™οΈ Response Configuration

Each HTTP endpoint has a comprehensive response configuration. In the Response section, you can set:

  • Status Code: Any HTTP status from 200 to 599
  • Response Delay: Simulate network latency in milliseconds (0-60000ms)
  • Response Body: JSON, XML, plain text, or any content with syntax highlighting
  • Response Headers: Custom headers returned with the response

The response body supports dynamic variables using the {{random.*}} syntax. For example, {{random.number(1,100)}} generates a random number between 1 and 100 each time the endpoint is called. More details about random variables can be found in the Random Variables section below.

✨ Best Practices

  • β€’ Use realistic HTTP status codes to make your tests more reliable
  • β€’ Create multiple endpoints with different matching rules to simulate various scenarios
  • β€’ Use body matching sparingly - it's powerful but can make debugging harder if overused
  • β€’ Add response delays to test loading states and race conditions in your app
  • β€’ Leverage random variables to generate realistic dynamic data
  • β€’ Document your endpoints with descriptive paths and consistent naming

Path Variables

Path variables allow you to create dynamic endpoints that accept parameters in the URL. Use the format {paramName:type} to define typed parameters.

Example:

/users/{id:number}/posts/{slug:slug}

πŸ“ How to Add Path Variables:

  1. Select your endpoint
  2. In the Path input field (top section), type your path with variables
  3. Use the format: /path/{variable:type}
  4. Example: /users/{id:number}
  5. Path variables are automatically validated on each request

πŸš€ Express.js Style (Alternative)

You can also use the popular Express.js style :paramName format:

/users/:id/posts/:postId

Note: This format doesn't support type validation. Use {id:type} format if you need type validation.

Tip: You can mix both formats: /users/:id/posts/{slug:slug}

Type Reference

Each built-in type, with valid and invalid examples:

  β€’ number: Only digits (integers). Valid: 123, 456, 789. Invalid: 12abc, 12.5, abc
  β€’ double: Decimal numbers. Valid: 12.5, 123, 0.99. Invalid: 12abc, abc
  β€’ string: Only letters (a-z, A-Z). Valid: john, abc, Hello. Invalid: 123, abc123, john-doe
  β€’ alphanumeric: Letters and digits only. Valid: abc123, 12abc, User1. Invalid: abc-123, test_1, hello!
  β€’ slug: URL-friendly, lowercase + hyphens (must have at least one hyphen). Valid: my-post, hello-world, product-123. Invalid: MyPost, my_post, mypost
  β€’ email: Email address format. Valid: [email protected]. Invalid: test.com, @example.com
  β€’ uuid: UUID format (8-4-4-4-12). Valid: 550e8400-e29b-41d4-a716-446655440000. Invalid: 123, abc-def
  β€’ any: Accepts any value (default). Valid: anything
  β€’ custom regex: Any regex pattern you define (e.g., [A-Z]{2}\d{4}, \w{3,16}). Validity depends on the pattern
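The built-in types can be approximated with regular expressions, with unknown type names falling through to custom regex as the docs describe. These patterns are illustrative guesses at the validation rules, not Mocklantis source code.

```python
import re

# Approximate regexes for the built-in path variable types (assumptions).
TYPE_PATTERNS = {
    "number": r"\d+",
    "double": r"\d+(\.\d+)?",
    "string": r"[A-Za-z]+",
    "alphanumeric": r"[A-Za-z0-9]+",
    "slug": r"[a-z0-9]+(-[a-z0-9]+)+",  # lowercase, at least one hyphen
    "uuid": r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
            r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}",
}

def validate(value: str, type_name: str) -> bool:
    # An unrecognized type name is treated as a custom regex pattern.
    pattern = TYPE_PATTERNS.get(type_name, type_name)
    return re.fullmatch(pattern, value) is not None

print(validate("123", "number"))             # True
print(validate("12abc", "number"))           # False
print(validate("my-post", "slug"))           # True
print(validate("mypost", "slug"))            # False (no hyphen)
print(validate("TR1234", r"[A-Z]{2}\d{4}"))  # True (custom regex)
```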

Real-World Examples

User Management API

Endpoint: /users/{id:number}
Valid: /users/123
Invalid: /users/abc (returns 400)

Blog Posts

Endpoint: /posts/{slug:slug}/comments/{id:number}
Valid: /posts/my-first-post/comments/42
Invalid: /posts/MyPost/comments/42 (uppercase)
Invalid: /posts/mypost/comments/42 (no hyphen)

Session Verification

Endpoint: /sessions/{sessionId:uuid}
Valid: /sessions/550e8400-e29b-41d4-a716-446655440000
Invalid: /sessions/123

Custom Regex Patterns

Beyond built-in types, you can define custom regex patterns for advanced validation.

Simply use any regex pattern after the colon. If it's not a built-in type, it will be treated as a regex.

Country Code + Order ID

Pattern: {code:[A-Z]{2}\d{4}} (2 letters + 4 digits)
Endpoint: /orders/{code:[A-Z]{2}\d{4}}
Valid: /orders/TR1234, /orders/US9999
Invalid: /orders/tr1234, /orders/T1234, /orders/TR12

MD5 Hash

Pattern: {hash:[a-f0-9]{32}} (32 hex chars)
Endpoint: /files/{hash:[a-f0-9]{32}}
Valid: /files/5d41402abc4b2a76b9719d911017c592
Invalid: /files/abc123 (too short)

Semantic Version

Pattern: {version:\d{1,2}\.\d{1,2}\.\d{1,3}} (e.g., 1.2.3)
Endpoint: /api/{version:\d{1,2}\.\d{1,2}\.\d{1,3}}/users
Valid: /api/1.0.0/users, /api/2.15.3/users
Invalid: /api/v1/users, /api/1.0/users

Username with Length Constraint

Pattern: {username:\w{3,16}} (3-16 alphanumeric)
Endpoint: /profile/{username:\w{3,16}}
Valid: /profile/john_doe, /profile/user123
Invalid: /profile/jo (too short), /profile/verylongusername123

Pro Tips

πŸ’‘

If you don't specify a type, it defaults to any

πŸ’‘

Mix and match multiple path variables: /api/{v:number}/users/{name:string}

πŸ’‘

Type validation happens at runtime - invalid requests return 400 Bad Request

πŸ’‘

Use slug for URL paths like my-blog-post (lowercase + hyphens required)

🎯

Custom regex patterns give you ultimate flexibility - any pattern not matching built-in types is treated as regex

🎯

Remember to escape special regex characters: {version:\d{1,2}\.\d{1,2}} not {version:\d{1,2}.\d{1,2}}

Headers

Request Headers

Validate incoming request headers to ensure clients send the correct headers. Enable Match Request Headers toggle to activate validation.

πŸ“ How to Add Request Headers:

  1. Select your endpoint
  2. Click on the Request tab
  3. Click on the Headers sub-tab
  4. Click + Add Header button
  5. Enter header name and expected value
  6. Enable Match Request Headers toggle

Example:

Authorization: Bearer token123
Content-Type: application/json

βœ“ Requests with matching headers β†’ 200 OK
βœ— Requests with missing/incorrect headers β†’ 400 Bad Request

πŸ“˜ Real World Examples: Content-Type Behavior

Mocklantis aims to provide a high-quality mocking experience by simulating real-world HTTP behavior. We automatically add Content-Type headers for your convenience, but our validation logic follows actual HTTP standards to ensure your mocks behave like production servers.

How it works: When you add a request body, Mocklantis automatically detects and sets the Content-Type (JSON, XML, URL Encoded, or plain text). However, when validating incoming requests with Match Request Headers enabled, we follow real-world rules:

βœ… Scenario 1: No Body β†’ Content-Type Optional

GET /api/users

When there's no request body (common in GET, HEAD, DELETE), Content-Type is not validated. This matches how real HTTP servers behave - no body means Content-Type doesn't matter.

⚠️ Scenario 2: Body Present, No Content-Type β†’ Error

POST /api/users
{"name": "James"}

If a request includes a body but no Content-Type header, validation fails with 400 Bad Request. This protects your mock endpoints from malformed requests, just like real production APIs.

βœ… Scenario 3: Body + Content-Type β†’ Perfect

POST /api/users
Content-Type: application/json

{"name": "David"}

A complete, well-formed request with proper Content-Type header passes validation successfully. This is the standard way HTTP clients (like Postman, fetch, axios) send requests.

Why this matters: By following real-world HTTP behavior, Mocklantis helps you catch integration issues early. If your client sends requests without proper headers, you'll discover it during development, not in production.

Response Headers

Add custom headers to your mock responses. Useful for CORS, caching, content-type, and more.

πŸ“ How to Add Response Headers:

  1. Select your endpoint
  2. Click on the Response tab
  3. Click on the Headers sub-tab
  4. Click + Add Header button
  5. Enter header name and value

Common Examples:

Content-Type: application/json
Content-Type: application/xml
Content-Type: application/x-www-form-urlencoded
Access-Control-Allow-Origin: *
Cache-Control: no-cache
X-Custom-Header: custom-value

Real-World Use Cases

Authentication: Bearer Token & API Keys

Test protected endpoints by validating authentication headers. Create multiple endpoints to simulate authenticated vs unauthenticated scenarios.

Endpoint 1: Authenticated Access
REQUEST HEADERS (Required)
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
Enable "Match Request Headers" toggle
REQUEST
GET /api/profile
With Authorization header
RESPONSE (200 OK)
{
  "userId": "12345",
  "name": "John Doe",
  "email": "[email protected]"
}
Endpoint 2: Unauthenticated Access (Same Path)
REQUEST HEADERS
No Authorization header (or "Match Request Headers" disabled)
REQUEST
GET /api/profile
Without Authorization header
RESPONSE (401 Unauthorized)
{
  "error": "Unauthorized",
  "message": "Missing or invalid authentication token"
}

πŸ’‘ How it Works:

Create two endpoints with the same path /api/profile. Endpoint 1 requires the Authorization header and returns user data. Endpoint 2 doesn't match headers and returns 401. Mocklantis automatically routes requests based on header presence!

CORS: Cross-Origin Resource Sharing

Test CORS behavior by adding appropriate response headers. Essential for frontend development with different origins.

CORS-Enabled Endpoint
RESPONSE HEADERS (Add these)
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Max-Age: 86400
USE CASE

When your frontend (localhost:3000) calls your mock API (localhost:8021), browsers require CORS headers. Add these response headers to enable cross-origin requests.

⚠️ Preflight Requests:

Browsers send OPTIONS requests before actual requests for CORS. Create a separate OPTIONS endpoint with the same path and return 204 No Content with CORS headers to handle preflight requests.

API Versioning

Test different API versions by validating version headers or using path variables.

Version-Based Routing
OPTION 1: Header-Based Versioning
Request Header: Accept: application/vnd.myapi.v2+json
Create separate endpoints with different Accept headers
OPTION 2: Custom Version Header
Request Header: X-API-Version: 2.0
Enable "Match Request Headers" and set X-API-Version value
EXAMPLE
Endpoint 1: GET /api/users
Header: X-API-Version: 1.0
Response: Legacy format
Endpoint 2: GET /api/users
Header: X-API-Version: 2.0
Response: New format with extra fields

Content Negotiation: JSON vs XML

Return different response formats based on the Accept header.

Format-Based Response
JSON ENDPOINT
Request Header: Accept: application/json
Response Header: Content-Type: application/json
Response Body:
{"name": "John", "age": 30}
XML ENDPOINT (Same Path)
Request Header: Accept: application/xml
Response Header: Content-Type: application/xml
Response Body:
<user>
  <name>John</name>
  <age>30</age>
</user>

Rate Limiting Headers

Simulate rate limiting by returning appropriate headers in responses.

Response Headers for Rate Limiting
ADD THESE RESPONSE HEADERS
X-RateLimit-Limit: 100 (max requests per hour)
X-RateLimit-Remaining: 95 (remaining requests)
X-RateLimit-Reset: 1699891200 (reset timestamp)
Retry-After: 3600 (seconds until reset)
TEST SCENARIO

Create an endpoint that returns 429 Too Many Requests with these headers to test how your app handles rate limiting.

Caching & Cache Control

Control caching behavior with response headers.

Common Caching Scenarios
No Cache (Always Fresh)
Cache-Control: no-cache, no-store, must-revalidate
Cache for 1 Hour
Cache-Control: max-age=3600
Cache for 1 Day (Public)
Cache-Control: public, max-age=86400
Private Cache (User-Specific)
Cache-Control: private, max-age=3600

Pro Tips

πŸ’‘

Request header validation is strict - headers must match exactly

πŸ’‘

Response headers are sent with every response automatically

πŸ’‘

Use response headers to simulate real API behavior (CORS, auth tokens, etc.)

✨

Create multiple endpoints with the same path but different header requirements to test authenticated vs unauthenticated flows

🎯

Use header matching for API versioning - same path, different responses based on version header

Multiple Responses

What is Multiple Responses?

Multiple Responses is a powerful feature that allows a single endpoint to return different responses on successive requests. Instead of always returning the same static response, the endpoint cycles through a list of predefined responses based on your chosen mode.

This feature is essential for testing real-world scenarios where API responses change over time, such as polling for job status, handling retries after failures, or simulating state machines.

Why is this important? Real APIs rarely return the same response every time. A payment API might return "pending" β†’ "processing" β†’ "completed". A file upload API might fail on the first attempt but succeed on retry. Multiple Responses lets you simulate all these scenarios without writing complex mock logic.

When to Use Multiple Responses

Perfect For
  • β€’ Testing polling mechanisms
  • β€’ Simulating retry logic
  • β€’ State machine transitions
  • β€’ Paginated API responses
  • β€’ Rate limiting scenarios
  • β€’ Progressive loading states
  • β€’ A/B testing simulation
  • β€’ Error recovery flows
Not Needed For
  • β€’ Static endpoints that always return the same data
  • β€’ Simple CRUD operations
  • β€’ Endpoints where response varies by request parameters (use Request Matching instead)

How It Works

When Multiple Responses is enabled, Mocklantis maintains an internal counter for each endpoint. Every time the endpoint is called, the counter advances and the corresponding response is returned.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    Multiple Responses Flow                       β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                  β”‚
β”‚   Request #1 ────────────► Response #1 (status: "pending")      β”‚
β”‚                                   β”‚                              β”‚
β”‚                                   β–Ό                              β”‚
β”‚   Request #2 ────────────► Response #2 (status: "processing")   β”‚
β”‚                                   β”‚                              β”‚
β”‚                                   β–Ό                              β”‚
β”‚   Request #3 ────────────► Response #3 (status: "completed")    β”‚
β”‚                                   β”‚                              β”‚
β”‚                                   β–Ό                              β”‚
β”‚   Request #4 ────────────► Response #1 (cycles back)            β”‚
β”‚                            [Sequential Mode]                     β”‚
β”‚                                                                  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Key Point: The sequence state is maintained in memory. Each endpoint has its own independent counter, so multiple endpoints can have different sequence states simultaneously.

Response Modes Explained

The mode determines how Mocklantis cycles through your responses. Choose the mode that best matches your testing scenario.

SEQUENTIAL (Default Mode)

Cycles through all responses in order, then loops back to the beginning. This creates an infinite cycle of responses.

With 3 responses:

1 β†’ 2 β†’ 3 β†’ 1 β†’ 2 β†’ 3 β†’ 1 β†’ 2 β†’ 3 β†’ ...

Best Use Cases:

  • β€’ Polling workflows: Simulate a job going through states repeatedly
  • β€’ Round-robin responses: Return different data each time for variety
  • β€’ Pagination testing: Cycle through pages of data
  • β€’ Load balancer simulation: Simulate responses from different backend servers
RANDOM

Picks a random response each time. Each response has an equal probability of being selected, regardless of what was returned before.

With 3 responses:

2 β†’ 1 β†’ 1 β†’ 3 β†’ 2 β†’ 3 β†’ 1 β†’ 2 β†’ ... (unpredictable)

Best Use Cases:

  • β€’ Chaos testing: Simulate unpredictable API behavior
  • β€’ Flaky service simulation: Sometimes succeeds, sometimes fails randomly
  • β€’ A/B testing: Return different variants randomly
  • β€’ Load balancer with failures: Randomly hit healthy or unhealthy backends
REPEAT LAST

Goes through all responses once in order, then keeps returning the last response forever. This is perfect for simulating one-time state transitions.

With 3 responses:

1 β†’ 2 β†’ 3 β†’ 3 β†’ 3 β†’ 3 β†’ 3 β†’ ... (sticks on last)

Best Use Cases:

  • β€’ Retry logic testing: Fail N times, then succeed permanently
  • β€’ Job completion: pending β†’ processing β†’ completed (stays completed)
  • β€’ One-time errors: Return error once, then work normally
  • β€’ Initialization flows: First call sets up, subsequent calls return data

How to Enable Multiple Responses

1

Select an HTTP Endpoint

Click on any HTTP endpoint in your server's endpoint list to open the editor.

2

Open the Multiple Responses Tab

In the endpoint editor, click on the "Multiple Responses" tab (usually next to Body, Headers tabs).

3

Activate Multiple Responses

Check the "Activate Multiple Responses" checkbox. A default first response will be created automatically.

4

Choose Your Mode

Select Sequential, Random, or Repeat Last based on your testing scenario.

5

Add and Configure Responses

Click "+ Add Response" to add more responses. Configure each with its own status, body, headers, and delay.

6

Save the Endpoint

Click Save to apply your changes. The endpoint will now cycle through your configured responses.

Response Configuration

Each response in your sequence is fully independent. You can configure different status codes, bodies, headers, and delays for each response.

Each response exposes these properties:

  β€’ Label: A descriptive name for the response (e.g., "Initial State", "Error Response", "Success"). Double-click to edit.
  β€’ Status Code: HTTP status (200, 404, 500, etc.). Can differ per response, allowing success β†’ failure patterns.
  β€’ Response Body: JSON, XML, or plain text body. Each response can have completely different content.
  β€’ Headers: Custom headers for this specific response. Useful for rate limit headers, cache headers, etc.
  β€’ Delay: Response delay in milliseconds. Simulate slow responses, timeouts, or varying latency per response.

Response #1 (Pending):

{ "status": "pending", "progress": 0 }

Response #2 (Completed):

{ "status": "completed", "progress": 100 }

Advanced Options

By default, the sequence advances after every request. These advanced options give you more control over when the sequence advances.

REQUEST COUNT

Return the same response for a specific number of requests before advancing to the next. This is controlled per-response, not globally.

How to Enable:
  1. Check the "Enable Request Count" checkbox
  2. Expand each response
  3. Set the "Request Count" value (e.g., 3)
  (Default: 1 request if left empty)

Configuration:

Response #1: requestCount = 3

Response #2: requestCount = 2

Response #3: requestCount = 1

Result (Sequential mode):

1 β†’ 1 β†’ 1 β†’ 2 β†’ 2 β†’ 3 β†’ 1 β†’ 1 β†’ 1 β†’ 2 β†’ 2 β†’ 3 β†’ ...

Use Case: Simulate rate limiting - allow 5 successful requests, then return 429 Too Many Requests.
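Request Count can be modeled by expanding each response by its count and cycling sequentially, reproducing the 1 β†’ 1 β†’ 1 β†’ 2 β†’ 2 β†’ 3 pattern shown above. A sketch, under that assumption:

```python
def with_request_counts(responses, calls):
    """responses: list of (label, request_count) pairs; returns served labels."""
    # Expand each response by its request count, then cycle through the result.
    expanded = [label for label, count in responses for _ in range(count)]
    return [expanded[i % len(expanded)] for i in range(calls)]

plan = [("#1", 3), ("#2", 2), ("#3", 1)]
print(with_request_counts(plan, 8))
# ['#1', '#1', '#1', '#2', '#2', '#3', '#1', '#1']
```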

EXPIRE TIME

Return the same response for a specified duration (in milliseconds) before advancing. Time-based advancement instead of request-count based.

How to Enable:
  1. Check the "Enable Response Expire Time" checkbox
  2. Expand each response
  3. Set the "Expire Time" value in milliseconds (e.g., 5000 = 5 seconds)
  (Default: 5000ms (5 seconds) if left empty)

Configuration:

Response #1: expireTime = 10000 (10 seconds)

Response #2: expireTime = 5000 (5 seconds)

Result:

All requests in first 10s β†’ Response #1

All requests in next 5s β†’ Response #2

Then cycles back (Sequential mode)

Use Case: Simulate a job that's "pending" for 30 seconds, then becomes "completed" - regardless of how many times you poll.

Note: Request Count and Expire Time can be used together. When both are set on a response, the sequence advances when EITHER condition is met (whichever comes first).
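Expire Time can be modeled as advancing by elapsed time rather than by call count. In this sketch the elapsed time is passed in explicitly so the behavior is testable without sleeping; the cycling matches Sequential mode as described above.

```python
def response_at(expire_times, elapsed_ms):
    """expire_times: per-response durations in ms; returns the index active
    at elapsed_ms, cycling through the list (Sequential mode)."""
    cycle = sum(expire_times)
    t = elapsed_ms % cycle
    for i, duration in enumerate(expire_times):
        if t < duration:
            return i
        t -= duration
    return len(expire_times) - 1

durations = [10_000, 5_000]            # Response #1 for 10s, then #2 for 5s
print(response_at(durations, 3_000))   # 0 (within first 10s)
print(response_at(durations, 12_000))  # 1 (within next 5s)
print(response_at(durations, 16_000))  # 0 (cycled back)
```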

Real-World Example Scenarios

1. Job Status Polling (Long-Running Task)

Simulate an async job that progresses through multiple states. Common in file processing, report generation, or payment processing APIs.

Response #1 - Queued:

{ "jobId": "abc123", "status": "queued", "progress": 0 }

Response #2 - Processing:

{ "jobId": "abc123", "status": "processing", "progress": 50 }

Response #3 - Finalizing:

{ "jobId": "abc123", "status": "finalizing", "progress": 90 }

Response #4 - Completed:

{ "jobId": "abc123", "status": "completed", "progress": 100 }

Mode: Repeat Last - Once completed, always returns completed state

2. Testing Retry Logic with Failures

Simulate a flaky service that fails multiple times before succeeding. Test that your app correctly implements retry logic.

Response #1 - Server Error:

Status: 503 | { "error": "Service temporarily unavailable" }

delay: 100ms

Response #2 - Timeout:

Status: 504 | { "error": "Gateway timeout" }

delay: 5000ms (simulates slow response)

Response #3 - Success:

Status: 200 | { "data": "Here's your data!" }

delay: 50ms

Mode: Repeat Last - After retries, API works normally
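Client-side retry logic you might verify against this mock can be sketched as below. The flaky endpoint is stubbed locally with the same 503 β†’ 504 β†’ 200 sequence; a real client would also sleep with backoff and jitter between attempts.

```python
def call_with_retry(fn, max_attempts=4):
    """Retry fn() on 5xx responses, up to max_attempts tries."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        status, body = fn()
        if status < 500:
            return attempt, body  # success (or non-retryable client error)
        last_error = (status, body)
    raise RuntimeError(f"gave up after {max_attempts} attempts: {last_error}")

# Stub mimicking the mock's Repeat Last sequence: 503 β†’ 504 β†’ 200.
responses = iter([(503, "unavailable"), (504, "timeout"), (200, "Here's your data!")])
attempts, body = call_with_retry(lambda: next(responses))
print(attempts, body)  # 3 Here's your data!
```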

3. Simulating Cursor-Based Pagination

Return different pages of data on subsequent requests. Test that your app correctly handles pagination and stops when there's no more data.

Response #1 - Page 1:

{ "items": [...10 items...], "nextCursor": "abc", "hasMore": true }

Response #2 - Page 2:

{ "items": [...10 items...], "nextCursor": "def", "hasMore": true }

Response #3 - Last Page:

{ "items": [...5 items...], "nextCursor": null, "hasMore": false }

Mode: Repeat Last - Last page keeps returning empty/final state
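A client loop against this three-page mock follows nextCursor until hasMore is false. The pages are stubbed locally (with numeric placeholder items) so the sketch runs standalone:

```python
# Local stand-in for the mock's three responses, keyed by cursor.
PAGES = {
    None:  {"items": list(range(10)),     "nextCursor": "abc", "hasMore": True},
    "abc": {"items": list(range(10, 20)), "nextCursor": "def", "hasMore": True},
    "def": {"items": list(range(20, 25)), "nextCursor": None,  "hasMore": False},
}

def fetch_all(get_page):
    """Accumulate items across pages, stopping when hasMore is false."""
    items, cursor = [], None
    while True:
        page = get_page(cursor)
        items.extend(page["items"])
        if not page["hasMore"]:
            return items
        cursor = page["nextCursor"]

print(len(fetch_all(PAGES.__getitem__)))  # 25
```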

4. Rate Limiting Simulation

Test how your app handles rate limits. Allow N requests, then return 429 Too Many Requests.

Response #1 - Success (requestCount: 5):

Status: 200 | { "data": "OK" }

Headers: X-RateLimit-Remaining: 5

Response #2 - Rate Limited:

Status: 429 | { "error": "Too Many Requests", "retryAfter": 60 }

Headers: Retry-After: 60

Mode: Sequential with Request Count - Allows 5 requests, then blocks, then resets

5. Optimistic UI with Delayed Confirmation

Simulate an API that accepts a change immediately (202) but takes time to process. Subsequent GET requests show the updated state.

Response #1 - Accepted (requestCount: 1):

Status: 202 | { "message": "Change accepted, processing..." }

Response #2 - Still Processing (requestCount: 2):

Status: 200 | { "status": "processing", "oldValue": "..." }

Response #3 - Updated:

Status: 200 | { "status": "completed", "newValue": "..." }

Mode: Repeat Last - Final state persists

6. Deployment / Maintenance Window Testing

Simulate a service that's initially unavailable (during deployment) and then becomes healthy.

Response #1 - Deploying (expireTime: 30000):

Status: 503 | { "error": "Service deploying, please wait" }

Response #2 - Ready:

Status: 200 | { "status": "healthy", "version": "2.0.0" }

Mode: Repeat Last with Expire Time - Down for 30s, then healthy forever

Managing Responses

Reorder Responses

Use the up/down arrow buttons on each response to change the order. Order matters in Sequential and Repeat Last modes.

Delete Responses

Click the trash icon to delete a response. You cannot delete the last remaining response (at least one is required).

Edit Labels

Double-click on a response label to edit it inline. Press Enter or click away to save.

Add Response

Click "+ Add Response" to add a new response to the end of the list. It will be automatically numbered and expanded for editing.

Tips and Best Practices

πŸ’‘

Use descriptive labels

Name your responses based on what they represent: "Initial Load", "After 3 Retries", "Rate Limited", "Final Success". This makes debugging much easier.

πŸ’‘

Sequence resets on server stop

The response counter resets when you stop the server. This ensures consistent behavior when running tests - each test run starts fresh.

πŸ’‘

Combine with different delays

Add varying delays to responses to simulate real-world conditions. First response slow (cold start), subsequent responses fast (cached).

πŸ’‘

Use Repeat Last for most scenarios

Repeat Last is the most common mode. It simulates a state machine that eventually reaches a final state. Use Sequential only when you need infinite cycling.

πŸ’‘

Test edge cases with Request Count

Use Request Count to test specific scenarios: "What happens on exactly the 5th request?" or "After 10 successful calls, start failing."

πŸ’‘

Check the logs

Use Mocklantis logs to see which response was returned for each request. This helps verify your sequence is working as expected.

Chaos Engineering

What is Chaos Engineering?

Chaos Engineering is a comprehensive fault injection and network simulation system that lets you test how your application handles real-world failures. Instead of hoping your app works when things go wrong, you can systematically verify it.

Mocklantis provides four powerful chaos capabilities that can be combined together:

Latency

Slow responses

Errors

Random failures

Corruption

Broken data

Rate Limit

Request throttling

Key Advantage: All chaos settings take effect immediately without restarting the server. Change error rates, latency, or corruption on the fly while your tests are running.

How to Enable Chaos Engineering

1

Select an HTTP Endpoint

Click on any HTTP endpoint in your server's endpoint list to open the editor.

2

Open the Chaos Tab

In the left sidebar of the endpoint editor, click on the red "Chaos" tab.

3

Activate Chaos Engineering

Toggle "Activate Chaos Engineering" to enable the feature. Four sub-tabs appear: Latency, Errors, Corruption, Rate Limit.

4

Configure Your Chaos Settings

Navigate between tabs to configure latency, errors, corruption, and rate limits. All settings are per-endpoint.

5

Save and Test

Save the endpoint. Chaos effects are applied immediately - no server restart needed.

LATENCY

Latency Injection

Add artificial delay to responses to test timeout handling, loading states, and user experience under slow network conditions. Choose from four latency modes:

NONE

No additional latency. Responses are sent immediately (or with endpoint's base delay if set).

FIXED

Add a constant delay to every response. Useful for simulating consistent network latency.

Configuration:

Fixed delay: 2000ms

Result:

Every request waits exactly 2 seconds

Use case: Test loading spinners, timeout configurations, user patience thresholds

RANDOM

Random delay within a range (uniform distribution). Each request gets a different delay.

Configuration:

Min: 100ms, Max: 3000ms

Result:

Request 1: 847ms, Request 2: 2341ms, Request 3: 156ms...

Use case: Simulate unpredictable network conditions, test race conditions

LOG-NORMAL (Recommended for realistic simulation)

Realistic latency distribution that matches real-world production patterns. Most requests are fast, but occasional requests are much slower (long tail).

Configuration:

Median: 200ms, Sigma: 0.8

Result (example):

50% of requests: 100-300ms (near median)

40% of requests: 300-800ms (moderately slow)

10% of requests: 800-3000ms+ (tail latency)

Understanding Sigma:

Οƒ = 0.3

Tight clustering

Most requests near median

Οƒ = 0.8

Moderate spread

Realistic production

Οƒ = 1.5

Wide spread

Occasional very slow

Use case: Production-like testing, P99 latency handling, performance budgets
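The median/sigma parameterization can be reproduced with a standard log-normal draw: delay = median Γ— e^(Οƒ Β· Z), where Z is standard normal. This is a sketch of the distribution shape, not Mocklantis's sampler; the seed is fixed only to make the run reproducible.

```python
import math
import random

def lognormal_delay_ms(median_ms, sigma, rng=random):
    # Log-normal with the given median: exp(ln(median) + sigma * Z).
    return median_ms * math.exp(sigma * rng.gauss(0, 1))

rng = random.Random(42)
samples = sorted(lognormal_delay_ms(200, 0.8, rng) for _ in range(10_000))
print(round(samples[5_000]))  # sample median lands near the configured 200ms
print(round(samples[9_900]))  # P99 shows the long tail (several times slower)
```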

Note: Chaos latency overrides the endpoint's base delay setting when enabled. Maximum latency is capped at 30 seconds to prevent hanging connections.

ERRORS

Error Injection

Randomly return error responses instead of successful ones. Configure the failure rate and define multiple different error responses that are randomly selected.

Error Rate (0-100%)

The percentage of requests that will fail. Set to 0% to disable, 100% to fail all requests.

Example: Error rate = 15%

Out of 100 requests:

β€’ ~85 requests β†’ Normal response (200 OK)

β€’ ~15 requests β†’ Random error from your list

Multiple Error Responses

Define multiple error responses with different status codes and bodies. When an error is triggered, one is randomly selected. This simulates real-world variety in failures.

Configuration example:

Error 1: 500 - {"error": "Internal Server Error"}

Error 2: 502 - {"error": "Bad Gateway"}

Error 3: 503 - {"error": "Service Unavailable"}

Error 4: 504 - {"error": "Gateway Timeout"}

When error triggers, one is randomly picked

Quick add buttons: 500, 502, 503, 504, 429
Custom Error Bodies

Each error response can have a custom JSON body. Match your real API's error format to test your client's error parsing.

Example custom body:

{
  "error": {
    "code": "SERVICE_UNAVAILABLE",
    "message": "The service is temporarily unavailable",
    "retryAfter": 30
  }
}

Tip: When an error is injected, the response includes an X-Chaos: error-injection header so you can identify chaos-induced failures in your logs.

CORRUPTION

Response Corruption

Corrupt the response data to test how your application handles malformed or incomplete data. This is crucial for testing defensive programming and graceful degradation.

Corruption Rate (0-100%)

The percentage of successful responses that will be corrupted. Error responses (from error injection) are not corrupted.

DROP FIELDS

Remove specific fields from the JSON response. Supports nested paths with dot notation.

Original response:

{"id": 1, "name": "Alice", "email": "[email protected]", "role": "admin"}

Target fields: ["email", "role"]

Corrupted response:

{"id": 1, "name": "Alice"}

Nested path example:

{"user": {"profile": {"contact": {"email": "[email protected]", "phone": "123"}}}}

Target fields: ["user.profile.contact.email"]

Corrupted response:

{"user": {"profile": {"contact": {"phone": "123"}}}}

Use case: Test null checks, optional field handling, backward compatibility
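In code, the dot-notation drop can be modeled like this (a minimal sketch assuming plain JSON objects; the real corruption logic may differ):

```python
def drop_field(payload: dict, path: str) -> dict:
    """Remove a possibly-nested field addressed by dot notation.

    Mutates `payload` in place and returns it; missing paths are ignored.
    """
    *parents, last = path.split(".")
    node = payload
    for key in parents:
        node = node.get(key)
        if not isinstance(node, dict):
            return payload            # path does not exist: leave untouched
    node.pop(last, None)
    return payload

doc = {"user": {"profile": {"contact": {"email": "[email protected]", "phone": "123"}}}}
drop_field(doc, "user.profile.contact.email")
```

After the call, `doc` matches the corrupted response shown above: the nested `email` is gone while `phone` survives.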

NULLIFY FIELDS

Set specific fields to null instead of removing them. Supports nested paths with dot notation.

Original response:

{"id": 1, "name": "Alice", "email": "[email protected]"}

Target fields: ["email"]

Corrupted response:

{"id": 1, "name": "Alice", "email": null}

Nested path example:

{"company": {"info": {"details": {"taxId": "123"}}}}

Target fields: ["company.info.details.taxId"]

Corrupted response:

{"company": {"info": {"details": {"taxId": null}}}}

Use case: Test nullable type handling, TypeScript strict null checks

TRUNCATE

Cut the response at a random position, simulating incomplete data transfer or connection drops.

Original response:

{"users": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}

Truncated response (random cut):

{"users": [{"id": 1, "na

Use case: Test JSON parse error handling, connection interruption recovery
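On the client side, truncated bodies surface as parse errors; a defensive wrapper like this illustrative Python helper keeps them from crashing your app:

```python
import json

def parse_body(raw: str):
    """Return (data, error); never raises on malformed JSON."""
    try:
        return json.loads(raw), None
    except json.JSONDecodeError as exc:
        return None, f"malformed response at char {exc.pos}"

data, err = parse_body('{"users": [{"id": 1, "name": "Alice"}]}')
bad_data, bad_err = parse_body('{"users": [{"id": 1, "na')   # truncated
```

The second call reports an error instead of raising, which is exactly the behavior this corruption mode lets you verify.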

MALFORMED JSON

Append invalid syntax to the response body so that JSON parsing fails.

Original response:

{"id": 1, "name": "Alice"}

Malformed response:

{"id": 1, "name": "Alice"},,invalid

Use case: Test JSON parser error handling, try-catch coverage

Tip: When corruption is applied, the response includes an X-Chaos: corruption header. The Content-Type remains application/json even for malformed responses.

RATE LIMIT

Rate Limiting Simulation

Simulate API rate limiting to test how your application handles throttling. Configure request limits, time windows, and custom rate-limit responses.

Configuration Options
MAX REQUESTS

Number of allowed requests per time window (1-10,000)

WINDOW

Time window in seconds (1-3,600; maximum 1 hour)

STATUS

HTTP status when rate limited (default: 429 Too Many Requests)

BODY

Custom JSON response body when rate limited

How It Works

Configuration: 5 requests per 60 seconds

Request 1 (0:00) β†’ 200 OK βœ“

Request 2 (0:10) β†’ 200 OK βœ“

Request 3 (0:20) β†’ 200 OK βœ“

Request 4 (0:30) β†’ 200 OK βœ“

Request 5 (0:40) β†’ 200 OK βœ“

Request 6 (0:45) β†’ 429 Too Many Requests βœ—

Request 7 (0:50) β†’ 429 Too Many Requests βœ—

--- Window resets at 1:00 ---

Request 8 (1:05) β†’ 200 OK βœ“

Each endpoint has its own independent counter. The counter resets after the configured time window elapses, at which point requests are allowed again.
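A counter of this kind can be sketched as follows; this reproduces the example timeline but is only an approximation of Mocklantis's internal algorithm:

```python
class WindowRateLimiter:
    """Per-endpoint request counter that resets each window."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.window_start = None
        self.count = 0

    def allow(self, now: float) -> bool:
        # Reset the counter once the window has elapsed.
        if self.window_start is None or now - self.window_start >= self.window:
            self.window_start = now
            self.count = 0
        self.count += 1
        return self.count <= self.max_requests


limiter = WindowRateLimiter(max_requests=5, window_seconds=60)
timeline = [0, 10, 20, 30, 40, 45, 50, 65]   # seconds, as in the example
statuses = [200 if limiter.allow(t) else 429 for t in timeline]
```

Running the example timeline yields five 200s, two 429s, and a 200 after the window resets, matching the table above.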

Custom Rate Limit Response

Define a custom JSON body to match your real API's rate limit format:

{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Please retry after 60 seconds.",
  "retryAfter": 60,
  "limit": 5,
  "remaining": 0
}

Key Feature: Rate limiting is evaluated before other chaos effects. If a request is rate-limited, it returns immediately without applying latency, errors, or corruption.

Combining Chaos Effects

The real power of Mocklantis Chaos Engineering is combining multiple effects on a single endpoint. This creates realistic production-like failure scenarios.


    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚                             CHAOS EVALUATION ORDER                                  β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

                                        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                        β”‚ Request β”‚
                                        β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”˜
                                             β”‚
                                             β–Ό
                               β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                               β”‚    Rate Limit Check     β”‚
                               β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                            β”‚
                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚                           β”‚
                              β–Ό                           β–Ό
                      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                      β”‚ Rate Limited β”‚           β”‚   Continue   β”‚
                      β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜           β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
                             β”‚                          β”‚
                             β–Ό                          β–Ό
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚   Return 429    β”‚     β”‚  Latency Calculation    β”‚
                    β”‚   immediately   β”‚     β”‚  (fixed/random/lognorm) β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                        β”‚
                                                        β–Ό
                                           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                           β”‚      Error Check        β”‚
                                           β”‚   (X% probability)      β”‚
                                           β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                        β”‚
                                          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                          β”‚                           β”‚
                                          β–Ό                           β–Ό
                                  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                  β”‚ Error Fires  β”‚           β”‚   No Error   β”‚
                                  β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜           β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
                                         β”‚                          β”‚
                                         β–Ό                          β–Ό
                                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                β”‚  Return Error   β”‚     β”‚    Build Response       β”‚
                                β”‚  Response       β”‚     β”‚    (normal endpoint)    β”‚
                                β”‚  + Apply Delay  β”‚     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                  β”‚
                                                                     β–Ό
                                                        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                                        β”‚    Corruption Check     β”‚
                                                        β”‚     (X% probability)    β”‚
                                                        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                                     β”‚
                                                       β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                                       β”‚                           β”‚
                                                       β–Ό                           β–Ό
                                               β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”           β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                               β”‚  Corrupted   β”‚           β”‚   Normal     β”‚
                                               β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜           β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
                                                      β”‚                          β”‚
                                                      β–Ό                          β–Ό
                                             β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”       β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                             β”‚ Apply Corruptionβ”‚       β”‚ Keep Original   β”‚
                                             β”‚ (drop/null/etc) β”‚       β”‚ Response Body   β”‚
                                             β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜       β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                      β”‚                         β”‚
                                                      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                                   β”‚
                                                                   β–Ό
                                                      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                                      β”‚     Apply Latency       β”‚
                                                      β”‚   (calculated delay)    β”‚
                                                      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                                   β”‚
                                                                   β–Ό
                                                      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                                      β”‚     Send Response       β”‚
                                                      β”‚   + X-Chaos Header      β”‚
                                                      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
Example: Production-Like API

Configuration:

  • β€’ Rate Limit: 100 req/60s
  • β€’ Latency: Log-normal (200ms, Οƒ=0.5)
  • β€’ Errors: 5% failure rate [500, 502, 503]
  • β€’ Corruption: 2% drop fields ["debug"]

Resulting behavior:

  • β€’ Most requests: 150-300ms, success
  • β€’ Some requests: 500ms-2s (tail latency)
  • β€’ ~5% of requests: Random 5xx error
  • β€’ ~2% of successes: Missing "debug" field
  • β€’ Over 100/min: 429 rate limited

Real-World Example Scenarios

1. Testing Client Timeout Configuration

Your app has a 5-second timeout. Will it handle slow responses gracefully?

Latency: Random 1000-8000ms

Expected behavior:

β€’ Some requests complete (1-5s)

β€’ Some requests timeout (5-8s)

β€’ Client should show timeout error, not hang

2. Testing Circuit Breaker Pattern

Does your circuit breaker open after consecutive failures?

Errors: 50% failure rate [503]

Expected behavior:

β€’ Circuit breaker should track failure rate

β€’ After threshold, circuit opens

β€’ Requests fail fast without hitting API

β€’ After cooldown, circuit half-opens for test
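A minimal client-side circuit breaker to exercise against this configuration might look like the following sketch (threshold and names are illustrative):

```python
class CircuitBreaker:
    """Opens after N consecutive failures; resets on any success."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.failures = 0
        self.state = "closed"

    def record(self, success: bool):
        if success:
            self.failures = 0
            self.state = "closed"
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.state = "open"     # fail fast from now on

breaker = CircuitBreaker(threshold=3)
for ok in [True, False, False, False]:
    breaker.record(ok)
```

With a 50% failure rate, three consecutive 503s arrive quickly, so you can watch the breaker flip to "open" and verify your app stops hammering the API.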

3. Testing Retry with Exponential Backoff

Does your app retry failed requests with proper backoff?

Errors: 30% failure rate [502, 503]

Latency: Fixed 500ms

Expected behavior:

β€’ First failure β†’ retry after 1s

β€’ Second failure β†’ retry after 2s

β€’ Eventually succeeds (70% success rate)

β€’ Max retries reached β†’ show error to user
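Client-side, the retry loop being exercised might look like this sketch (the real sleep is replaced by collecting the delays so the example stays instant):

```python
def call_with_retry(request, max_retries: int = 4, base_delay: float = 1.0):
    """Retry a failing call with exponential backoff (1s, 2s, 4s, ...)."""
    waited = []
    for attempt in range(max_retries):
        if request():
            return True, waited
        # In real code: time.sleep(base_delay * 2 ** attempt)
        waited.append(base_delay * 2 ** attempt)
    return False, waited

# Fake endpoint that fails twice and then succeeds, as a ~30% error
# rate often will.
outcomes = iter([False, False, True])
ok, waited = call_with_retry(lambda: next(outcomes))
```

Two failures cost 1s then 2s of backoff before the third attempt succeeds, which is the pattern the expected behavior above describes.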

4. Testing Graceful Degradation

Does your app degrade gracefully when optional data is missing?

Corruption: Drop fields ["recommendations", "related_products"] at 20%

Expected behavior:

β€’ App renders product page

β€’ Recommendations section shows "Not available"

β€’ No JavaScript errors in console

β€’ Core functionality unaffected

5. Testing Rate Limit Handling

Does your app respect rate limits and back off appropriately?

Rate Limit: 10 requests per 30 seconds

Expected behavior:

β€’ App detects 429 response

β€’ Shows "Please wait" message to user

β€’ Queues or delays further requests

β€’ Resumes after Retry-After period

6. Simulating Mobile/Slow Network

How does your app behave on a 3G connection?

Latency: Log-normal (median 800ms, Οƒ=1.0)

Errors: 5% failure rate [0] (connection drop)

Corruption: Truncate at 3%

Expected behavior:

β€’ Skeleton loaders show during slow loads

β€’ Optimistic UI updates for better UX

β€’ Reconnection logic for dropped connections

β€’ Proper error boundaries for corrupted data

7. Simulating Third-Party Payment API

Your checkout integrates with a payment provider. What happens when they have issues?

Latency: Log-normal (median 1500ms, Οƒ=0.8)

Errors: 8% [502, 503, 504]

Rate Limit: 50 requests per 60 seconds

Expected behavior:

β€’ Payment form shows loading state

β€’ Errors trigger retry with user notification

β€’ Rate limit queues requests, doesn't fail checkout

β€’ Timeout after 10s shows "Try again later"

Chaos Response Headers

When chaos effects are triggered, Mocklantis adds diagnostic headers to help you identify which effect caused the response:

Header    Value             Meaning
X-Chaos   error-injection   Response is an injected error (not a real failure)
X-Chaos   corruption        Response body was corrupted
X-Chaos   rate-limit        Request was rate limited

Tip: In your test assertions, you can check for these headers to verify chaos is working as expected without parsing response bodies.
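For example, a small helper (hypothetical, not part of any library) can classify a response by its X-Chaos header:

```python
def classify_failure(status: int, headers: dict) -> str:
    """Distinguish chaos-induced responses from real bugs via X-Chaos."""
    chaos = headers.get("X-Chaos")
    if chaos == "error-injection":
        return "injected-error"
    if chaos == "rate-limit":
        return "rate-limited"
    if chaos == "corruption":
        return "corrupted"
    return "real-failure" if status >= 500 else "ok"

kind = classify_failure(503, {"X-Chaos": "error-injection"})
```

A 503 carrying `X-Chaos: error-injection` is then asserted as expected chaos rather than reported as a regression.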

Tips and Best Practices

πŸ’‘

Start with low percentages

Begin with 5-10% error rates and low corruption. Increase gradually to find your app's breaking points without overwhelming your testing.

πŸ’‘

Use Log-Normal for realistic latency

Fixed delays are useful for specific tests, but Log-Normal distribution with Οƒ around 0.5-0.8 best simulates real production network behavior.

πŸ’‘

Test different endpoints differently

Critical endpoints (login, checkout) should be tested with higher chaos. Less critical endpoints can have lower rates. Each endpoint has independent settings.

πŸ’‘

Match your real API's error format

Use custom error bodies that match your production API's error structure. This ensures your error handling code is tested against realistic payloads.

πŸ’‘

Combine effects for production simulation

Real APIs have latency AND occasional errors AND rare data issues. Use multiple chaos features together to simulate this realistically.

πŸ’‘

Check X-Chaos headers in your tests

Use the X-Chaos header to distinguish between chaos-induced failures and actual bugs in your test assertions.

πŸ’‘

No restart needed

Adjust chaos settings on the fly while your app is running. This is perfect for iterative testing - tweak error rates until you find edge cases.

State Machine (Scenarios)

What is State Machine?

State Machine (also called Scenarios) enables stateful mock behavior where your HTTP endpoints respond differently depending on the current state of a scenario. Instead of returning the same response every time, endpoints can follow a flow of states.

This is essential for testing multi-step workflows where the API behavior changes over time: user registration flows, payment processing, order lifecycles, authentication sequences, and more.


    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚                         STATE MACHINE EXAMPLE                                β”‚
    β”‚                           Order Processing                                   β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚   Initial   β”‚
                              β”‚  (Pending)  β”‚
                              β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
                                     β”‚
                                     β–Ό
                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚  Confirmed  β”‚
                              β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
                                     β”‚
                        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                        β”‚                         β”‚
                        β–Ό                         β–Ό
                 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                 β”‚  Shipped    β”‚          β”‚  Cancelled  β”‚
                 β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                        β”‚
                        β–Ό
                 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                 β”‚  Delivered  β”‚
                 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

    GET /api/orders/123 returns different response based on current state:

    β€’ Initial    β†’ { "status": "pending", "canCancel": true }
    β€’ Confirmed  β†’ { "status": "confirmed", "estimatedDelivery": "..." }
    β€’ Shipped    β†’ { "status": "shipped", "trackingNumber": "..." }
    β€’ Delivered  β†’ { "status": "delivered", "deliveredAt": "..." }
    β€’ Cancelled  β†’ { "status": "cancelled", "refundStatus": "..." }
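The order flow above reduces to a transition table plus per-state responses. This Python sketch is illustrative (bodies trimmed), not how Mocklantis stores scenarios:

```python
# Hypothetical transition table mirroring the order diagram above.
TRANSITIONS = {
    "Initial":   ["Confirmed"],
    "Confirmed": ["Shipped", "Cancelled"],
    "Shipped":   ["Delivered"],
    "Delivered": [],
    "Cancelled": [],
}

RESPONSES = {
    "Initial":   {"status": "pending", "canCancel": True},
    "Confirmed": {"status": "confirmed"},
    "Shipped":   {"status": "shipped"},
    "Delivered": {"status": "delivered"},
    "Cancelled": {"status": "cancelled"},
}

state = "Initial"
first = RESPONSES[state]            # what GET /api/orders/123 returns now
state = TRANSITIONS[state][0]       # linear step: Initial -> Confirmed
second = RESPONSES[state]
state = TRANSITIONS[state][1]       # branching step: choose Cancelled
third = RESPONSES[state]
```

The same GET endpoint returns a different body at each step because only the scenario's current state changes.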
Visual Flow

Drag & drop graph editor

Real-Time

Instant state updates

No Restart

Changes apply instantly

Key Advantage: See exactly which state your scenario is in, watch it change in real-time as requests come in, and visually design complex multi-step flows with our drag-and-drop graph editor.

When to Use State Machine

Perfect For
  • β€’ Multi-step workflows (checkout, registration)
  • β€’ Order/payment lifecycle testing
  • β€’ Authentication flows (login β†’ 2FA β†’ verified)
  • β€’ Document approval processes
  • β€’ Subscription state changes
  • β€’ Booking systems (pending β†’ confirmed β†’ completed)
  • β€’ Complex branching logic (success vs failure paths)
  • β€’ Long-running async job simulation
Not Needed For
  • β€’ Simple CRUD endpoints
  • β€’ Stateless API responses
  • β€’ Random failures (use Chaos Engineering)
  • β€’ Cycling through responses (use Multiple Responses)

Creating a Scenario

1

Click "+ New Endpoint"

In your server's endpoint list, click the "+ New Endpoint" button.

2

Select "Scenario"

Choose "Scenario" from the endpoint type dropdown. This creates a state machine.

3

Name Your Scenario

Give it a descriptive name like "Order Flow", "User Registration", or "Payment Process". This name is used to bind endpoints.

4

Add States

Every scenario starts with an "Initial" state. Add more states using the "+ Add State" button. States represent different phases of your workflow.

5

Define Transitions

Use the visual flow graph to draw transitions between states. Connect states by dragging from one node to another.

Tip: The "Initial" state cannot be deleted. It's always the starting point when the scenario is reset.

Visual Flow Graph

The flow graph is a powerful visual editor that lets you design your state machine with drag-and-drop simplicity. No code required - just draw your workflow.

State Nodes
Initial
Starting state (purple)
Current
Current state (green + pulsing dot)
Other
Other states (gray)
Graph Controls
  • Drag nodes: Reposition states anywhere
  • Draw connection: Drag from a handle to another node
  • Delete edge: Shift+click to select, then Delete key
  • Fullscreen: Expand button for larger view
  • MiniMap: Navigate large graphs easily
  • Auto-fit: Center all nodes in view
Connection Handles

Each state node has 4 connection points (handles) - top, right, bottom, and left. Drag from any handle to create a transition to another state.

State

Hover over a node to see connection handles

Note: Node positions are automatically saved. When you reopen the scenario, your layout is preserved exactly as you left it.

Binding Endpoints to Scenarios

After creating a scenario, you need to bind HTTP endpoints to it. Each bound endpoint can return different responses based on the scenario's current state.

1

Open an HTTP Endpoint

Create a new HTTP endpoint or edit an existing one.

2

Go to the "Scenario" Tab

In the endpoint editor, click on the "Scenario" tab in the left sidebar.

3

Select a Scenario

Choose the scenario you want to bind from the dropdown. A link icon lets you jump to the scenario editor.

4

Configure State Responses

Add a response for each state. Each state response has its own status code, body, headers, and delay.

5

Enable "Advance State"

Check this option if you want the scenario to automatically move to the next state after this endpoint responds.

Multiple Endpoints, One Scenario

You can bind multiple HTTP endpoints to the same scenario. They all share the same state:

Scenario: "Order Flow" (current state: "Confirmed")

GET /orders/123 β†’ Returns "Confirmed" state response

POST /orders/123/ship β†’ Returns "Confirmed" response, advances to "Shipped"

GET /orders/123 β†’ Now returns "Shipped" state response

Multiple Endpoints Mapped to the Same State

This is crucial: Multiple endpoints can have responses configured for the same state. This is essential for read-only endpoints and branching flows.

Scenario: "Order Flow" - State: "Confirmed"

All these endpoints respond to "Confirmed" state:

GET /orders/123 β†’ Read order (Advance OFF)

GET /orders/123/details β†’ Read details (Advance OFF)

GET /orders/123/invoice β†’ Read invoice (Advance OFF)

POST /orders/123/ship β†’ Ship order (Advance ON β†’ Shipped)

POST /orders/123/cancel β†’ Cancel order (Advance ON β†’ Cancelled)

All 5 endpoints are bound to "Order Flow" and have responses for "Confirmed" state. GET endpoints read without advancing. POST endpoints trigger different transitions.

Real Example: Authentication Flow

Test your app's protected pages. Profile and Dashboard are only accessible when logged in:

Scenario: "Auth Flow"

States: LoggedOut β†’ LoggedIn

── "LoggedOut" state responses ──

GET /api/profile β†’ 401 {"error": "Unauthorized"}

GET /api/dashboard β†’ 401 {"error": "Unauthorized"}

GET /api/settings β†’ 401 {"error": "Unauthorized"}

POST /api/login β†’ 200 {"token": "..."} (Advance ON β†’ LoggedIn)

── "LoggedIn" state responses ──

GET /api/profile β†’ 200 {"name": "John", "email": "..."}

GET /api/dashboard β†’ 200 {"stats": {...}}

GET /api/settings β†’ 200 {"theme": "dark", ...}

POST /api/logout β†’ 200 {"message": "Bye"} (Advance ON β†’ LoggedOut)

Result: Before login, all protected endpoints return 401. After login, they return real data. Multiple endpoints share the same state and respond accordingly.

State Responses Configuration

Each state response defines what the endpoint returns when the scenario is in that particular state.

State Selection

Choose which state this response is for. Only states defined in the bound scenario are available.

HTTP Status Code

Each state can return a different status code. Common patterns:

Initial state β†’ 202 Accepted (processing)

Error state β†’ 400 Bad Request

Success state β†’ 200 OK

Response Body

JSON body for this state. Use the Monaco editor for syntax highlighting and validation.

State: "Shipped"

{
  "orderId": "123",
  "status": "shipped",
  "trackingNumber": "TRK-789456",
  "carrier": "FedEx",
  "estimatedDelivery": "2024-01-15"
}
Response Headers

Custom headers for this state. Useful for returning different cache headers, status indicators, or API-specific headers per state.

Delay

Response delay in milliseconds. Different states can have different delays to simulate real processing times.

Advance State Control

One of the most powerful features of State Machine is the "Advance State" checkbox. This controls whether hitting an endpoint advances the scenario to the next state or keeps it at the current state.

Advance State: ON

The endpoint returns the response AND advances the scenario to the next state. Next request will see a different response.

Scenario: Order Flow

GET /orders/123 (state: Pending)

β†’ Returns "pending" response

β†’ Advances to "Confirmed"

GET /orders/123 (state: Confirmed)

β†’ Returns "confirmed" response

Use: Each request moves the workflow forward

Advance State: OFF

The endpoint returns the response WITHOUT advancing the scenario. The state stays the same - perfect for polling!

Scenario: Job Status

GET /jobs/123 (state: Processing)

β†’ Returns "processing" response

β†’ State stays at "Processing"

GET /jobs/123 (state: Processing)

β†’ Returns same "processing" response

β†’ State STILL stays at "Processing"

Use: Poll same state multiple times before advancing

Real-World Example: Job Polling

You're testing a long-running job. Your frontend polls every 2 seconds. You want to see "processing" 5 times before it finally shows "completed".

Setup:

β€’ Scenario: "Job Status" with states: Queued β†’ Processing β†’ Completed

β€’ GET /jobs/123 bound to scenario

For "Processing" state: Advance State = OFF

↳ Your app can poll 10 times and still get "processing"

↳ Manually advance when ready to test "completed"

Key Insight

Advance State gives you fine-grained control over when state changes. Enable it for endpoints that should trigger transitions (like POST /order/confirm). Disable it for endpoints that just read state (like GET /order/status) - so your app can poll the same status repeatedly until you're ready to advance.

Linear vs Branching Flows

Mocklantis supports two types of state advancement: linear chains and branching flows.

LINEARAuto-advance

Each state has exactly one outgoing transition. The scenario automatically advances to the next state after the endpoint responds.

Initial β†’ Confirmed β†’ Shipped β†’ Delivered

Each state has ONE next state.
"Advance State" checkbox: enabled
No need to specify nextState.

Use case: Simple sequential workflows, polling for job status

BRANCHINGExplicit next state

A state has multiple outgoing transitions. You must specify which state to transition to in each state response's "Next State" field.

         β”Œβ†’ Approved
Pending ──
         β””β†’ Rejected

State "Pending" has TWO next states.
Each state response must specify:
  nextState: "Approved" or "Rejected"

Use case: Decision points, approval flows, payment success/failure

Warning: If a state has multiple outgoing transitions but you don't specify a "Next State" in the response, the scenario will stay in the current state. The UI will show a warning to help you catch this.

How State Advancement Works


    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚                          STATE ADVANCEMENT FLOW                                      β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

                                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                    β”‚ Request arrives β”‚
                                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                             β”‚
                                             β–Ό
                                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                β”‚ Match HTTP Endpoint     β”‚
                                β”‚ (bound to scenario)     β”‚
                                β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                             β”‚
                                             β–Ό
                                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                β”‚ Read current state      β”‚
                                β”‚ from scenario           β”‚
                                β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                             β”‚
                                             β–Ό
                                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                β”‚ Find state response     β”‚
                                β”‚ matching current state  β”‚
                                β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                             β”‚
                                             β–Ό
                                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                β”‚ Return response         β”‚
                                β”‚ (status, body, headers) β”‚
                                β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                             β”‚
                                             β–Ό
                                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                β”‚ "Advance State"         β”‚
                                β”‚ checkbox enabled?       β”‚
                                β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                             β”‚
                              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                              β”‚                             β”‚
                             YES                            NO
                              β”‚                             β”‚
                              β–Ό                             β–Ό
                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                β”‚ "Next State" specified  β”‚     β”‚ Stay in current state   β”‚
                β”‚ in state response?      β”‚     β”‚ (no advancement)        β”‚
                β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                             β”‚
               β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
               β”‚                           β”‚
              YES                          NO
               β”‚                           β”‚
               β–Ό                           β–Ό
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚ Transition to       β”‚    β”‚ Count outgoing transitions  β”‚
    β”‚ specified nextState β”‚    β”‚ from current state          β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                              β”‚
                                 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                 β”‚                         β”‚
                              1 transition            0 or 2+ transitions
                                 β”‚                         β”‚
                                 β–Ό                         β–Ό
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β”‚ Auto-advance to     β”‚    β”‚ Stay in current state   β”‚
                    β”‚ that single state   β”‚    β”‚ (cannot auto-decide)    β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
Key Points
  • β€’ State is read fresh on every request (no restart needed)
  • β€’ Response is sent first, then state advances
  • β€’ Linear flows auto-advance when there's exactly one outgoing transition
  • β€’ Branching flows require explicit "Next State" in state response
  • β€’ If no matching state response exists, endpoint returns its default response

Real-Time State Display

Mocklantis shows you the current state of your scenarios in real-time. No need to refresh or poll - the UI updates instantly when state changes.

Current State Indicator

The scenario detail page shows the current state at the top. It updates instantly as requests come in.

Current State: Shipped
Graph Highlight

The flow graph highlights the current state node in green with a pulsing indicator. Edges from the current state are animated.

Shipped
β†’ Active state with pulse

How It Works: Mocklantis uses an event-driven system (Tokio broadcast channels) to push state changes to the UI instantly. No polling, no delays - just instant updates.

Real-World Example Scenarios

1. E-Commerce Order Lifecycle

Test your order tracking UI through all possible states.

Scenario: "Order Flow"

States: Initial β†’ Confirmed β†’ Shipped β†’ Delivered

GET /orders/123 (Initial)

{ "status": "pending", "canCancel": true }

GET /orders/123 (Confirmed)

{ "status": "confirmed", "estimatedShipping": "2024-01-10" }

GET /orders/123 (Shipped)

{ "status": "shipped", "trackingNumber": "TRK-123" }

GET /orders/123 (Delivered)

{ "status": "delivered", "deliveredAt": "2024-01-12" }

Flow type: Linear - auto-advances through states

2. Payment Processing with Success/Failure

Test both success and failure paths of payment processing.

Scenario: "Payment Flow"

States: Initial β†’ Processing β†’ (Success | Failed)

POST /payments (Initial β†’ Processing)

Returns: 202 Accepted, advances to "Processing"

GET /payments/123 (Processing)

{ "status": "processing" }

nextState: "Success" or "Failed" (explicit)

GET /payments/123 (Success)

{ "status": "completed", "transactionId": "..." }

GET /payments/123 (Failed)

{ "status": "failed", "error": "Insufficient funds" }

Flow type: Branching - Processing has two possible next states

3. User Registration with Email Verification

Multi-step registration with email verification flow.

Scenario: "Registration Flow"

States: Initial β†’ PendingVerification β†’ Verified

POST /auth/register (Initial β†’ PendingVerification)

{ "message": "Check your email for verification link" }

GET /auth/me (PendingVerification)

{ "email": "[email protected]", "verified": false }

POST /auth/verify?token=abc (PendingVerification β†’ Verified)

{ "message": "Email verified successfully" }

GET /auth/me (Verified)

{ "email": "[email protected]", "verified": true }

Flow type: Linear - straightforward registration flow

4. Document Approval Workflow

Multi-level approval with multiple decision points.

Scenario: "Approval Flow"

States:
                   β”Œβ†’ ManagerApproved ─┬→ DirectorApproved β†’ Finalized
Draft β†’ Submitted ──                   β””β†’ DirectorRejected
                   β””β†’ ManagerRejected

Each approval endpoint specifies nextState based on action:

POST /docs/123/approve (by manager) β†’ nextState: "ManagerApproved"

POST /docs/123/reject (by manager) β†’ nextState: "ManagerRejected"

Flow type: Branching - multiple decision points

5. Async Job Processing (with Polling)

Long-running job with controlled polling - demonstrates Advance State OFF.

Scenario: "Job Flow"

States: Initial β†’ Queued β†’ Running β†’ Completed

POST /jobs (create job, Initial β†’ Queued)

{ "jobId": "job-123", "status": "queued" }

GET /jobs/job-123 (Queued) - Advance State: OFF

{ "jobId": "job-123", "status": "queued", "position": 3 }

↳ Can poll multiple times, always returns "queued"

GET /jobs/job-123 (Running) - Advance State: OFF

{ "jobId": "job-123", "status": "running", "progress": 50 }

↳ Manually advance when ready to complete

GET /jobs/job-123 (Completed)

{ "jobId": "job-123", "status": "completed", "result": {...} }

Flow type: Linear with Advance State OFF - controlled progression

Resetting a Scenario

When you need to start over or run tests again, you can reset a scenario back to its "Initial" state.

Manual Reset

Open the scenario and click the "Reset" button. The scenario immediately returns to the "Initial" state.

Server Restart

Stopping and starting the server also resets all scenarios to their "Initial" state. This ensures clean test runs.

Tip: Reset scenarios between test runs to ensure consistent behavior. Each test should start from a known state.

Tips and Best Practices

1. Use descriptive state names

Name states based on the business status: "PendingApproval", "PaymentReceived", "ShippedToCustomer" rather than "State1", "State2".

2. One scenario per workflow

Create separate scenarios for different workflows: "OrderFlow", "PaymentFlow", "UserRegistration". Don't try to combine unrelated flows.

3. Test both paths in branching flows

For branching scenarios (success/failure), make sure to test all paths. Reset the scenario between tests to start fresh.

4. Use the visual graph

The flow graph isn't just for show - it helps you understand complex flows at a glance and catch missing transitions.

5. Watch the current state

Keep the scenario page open while testing. Watch the current state update in real-time as requests come in - great for debugging.

6. Use Advance State OFF for polling endpoints

For GET endpoints that check status, disable Advance State so your app can poll the same state multiple times. Enable it only for action endpoints (POST, PUT).

7. Combine with delays

Add realistic delays to state responses to simulate processing time. The "Processing" state might have a 2s delay while "Completed" is instant.

Webhooks

Webhooks allow you to automatically trigger HTTP requests to external services when your mock endpoints are called. This enables you to simulate real-world integrations, test notification systems, and create realistic async workflows.

How it works:

Client Request β†’ Mock Response Sent β†’ Webhook Triggered (async)
                        ↓                       ↓
                   Immediate              External Service
                   Response               (Slack, Payment, etc.)

Key Features:

  • β€’ Fire-and-Forget: Webhooks execute asynchronously after the mock response is sent
  • β€’ Fully Isolated: Webhook failures never affect your mock endpoint responses
  • β€’ Template Variables: Extract values from the original request for your webhook payload

Creating a Webhook

πŸ“ How to Create:

  1. Click the dropdown arrow β–Ό next to the "+ New Endpoint" button in the header
  2. Select "Webhook" from the dropdown menu
  3. Give your webhook a descriptive name
  4. Configure the HTTP method and target URL
  5. Add headers and request body as needed
  6. Bind the webhook to one or more endpoints

⚠️ Important:

A webhook will only fire when it's bound to at least one endpoint. Unbound webhooks exist in your configuration but never execute.

Binding Webhooks to Endpoints

There are two ways to create a binding between a webhook and an endpoint:

Method 1: From the HTTP Endpoint (Hooks Tab)

  1. Select your HTTP endpoint in the sidebar
  2. Go to the "Hooks" tab in the body section
  3. You'll see a list of all available webhooks
  4. Click on a webhook to toggle its binding

Tip: Use the search box to filter webhooks. Click the external link icon to jump to webhook settings.

Method 2: From the Webhook Settings

  1. Select your webhook in the sidebar (orange "HOOK" badge)
  2. Find the "Trigger Endpoints" dropdown
  3. Search and select endpoints to bind
  4. One webhook can be bound to multiple endpoints

Tip: This method is useful when binding the same webhook to many endpoints at once.

Webhook Configuration

Request Settings

Setting   Description
Method    HTTP method (POST, PUT, GET, DELETE, PATCH, HEAD, OPTIONS)
URL       Target URL (http:// or https://)
Headers   Custom request headers (key-value pairs)
Body      Request body with template variable support

Advanced Settings

Setting          Default    Range          Description
Delay            0 ms       0-60000        Wait time before sending
Timeout          30000 ms   1000-120000    Max wait for response
Retry Count      0          0-10           Retry attempts on failure
Retry Delay      1000 ms    100-30000      Wait between retries
Skip SSL Verify  false      -              Ignore SSL cert errors

Authentication

Mocklantis supports 4 authentication types:

None

No authentication headers. Use for public webhooks.

Basic Auth

Username & password encoded as Base64.

Authorization: Basic dXNlcjpwYXNz

Bearer Token

JWT or OAuth token in Authorization header.

Authorization: Bearer eyJhbGc...

API Key

Custom header or query parameter.

X-API-Key: your-api-key-here

Template Variables

Extract values from the original HTTP request and include them in your webhook payload using {{category.key}} syntax.

Variable           Example                       Description
request.path.*     {{request.path.id}}           Path parameter (e.g., /users/:id)
request.query.*    {{request.query.page}}        Query parameter (e.g., ?page=1)
request.header.*   {{request.header.X-User-Id}}  Request header value
request.body       {{request.body}}              Entire request body as-is
request.body.*     {{request.body.user.email}}   JSONPath extraction from body
request.method     {{request.method}}            HTTP method (GET, POST, etc.)
request.url        {{request.url}}               Full URL path with query string
request.timestamp  {{request.timestamp}}         ISO-8601 timestamp
random.uuid        {{random.uuid}}               Random UUID v4
random.number      {{random.number(1,100)}}      Random number in range

JSONPath for Body Extraction:

{{request.body.user.name}} β†’ Nested object: body.user.name
{{request.body.items[0]}} β†’ Array index: first item
{{request.body.items[0].price}} β†’ Nested in array: first item's price

Example webhook body:

{
  "event": "order_created",
  "timestamp": "{{request.timestamp}}",
  "data": {
    "orderId": "{{request.path.id}}",
    "customer": "{{request.body.customer.name}}",
    "total": {{request.body.total}}
  }
}
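
A dot/bracket body path like the ones above could be resolved along these lines. This is a sketch only, not Mocklantis's template engine:

```python
import re

def resolve(path, data):
    """Walk a body path such as 'user.email' or 'items[0].price'."""
    # Split the path into object keys and [n] array indices.
    for part in re.findall(r"\[\d+\]|[^.\[\]]+", path):
        if part.startswith("["):
            data = data[int(part[1:-1])]   # array index
        else:
            data = data[part]              # object key
    return data

body = {"user": {"name": "Ada"}, "items": [{"price": 9.5}], "total": 42}
print(resolve("user.name", body))       # Ada
print(resolve("items[0].price", body))  # 9.5
```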

Real-World Use Cases

Sports: Live Match Score Updates

Simulate a sports API that notifies subscribers when match scores change. Perfect for testing real-time sports apps.

WEBHOOK CONFIG
Name: Score Update Notification
Method: POST
URL: https://your-app.com/webhooks/score
Bound to: PUT /api/matches/:matchId/score
USE CASES
  • β€’ Push notifications to mobile apps
  • β€’ Update live scoreboards
  • β€’ Trigger betting system updates
  • β€’ Feed social media bots
REQUEST BODY
{
  "type": "score_update",
  "matchId": "{{request.path.matchId}}",
  "homeTeam": "{{request.body.homeTeam}}",
  "awayTeam": "{{request.body.awayTeam}}",
  "homeScore": {{request.body.homeScore}},
  "awayScore": {{request.body.awayScore}},
  "minute": {{request.body.minute}},
  "eventType": "{{request.body.eventType}}",
  "timestamp": "{{request.timestamp}}"
}

Stock Market: Price Alerts

Simulate a trading platform that sends alerts when stock prices hit certain thresholds.

WEBHOOK CONFIG
Name: Stock Price Alert
Method: POST
URL: https://trading-app.com/alerts
Auth: API Key (X-Trading-Key)
Bound to: POST /api/stocks/:symbol/price
USE CASES
  • β€’ Price threshold alerts
  • β€’ Portfolio rebalancing triggers
  • β€’ Market volatility notifications
  • β€’ Automated trading signals
REQUEST BODY
{
  "alert": "price_threshold",
  "symbol": "{{request.path.symbol}}",
  "currentPrice": {{request.body.price}},
  "change": {{request.body.changePercent}},
  "volume": {{request.body.volume}},
  "timestamp": "{{request.timestamp}}",
  "alertId": "{{random.uuid}}"
}

AI/ML: Model Inference Callbacks

Simulate async AI/ML processing where results are delivered via callback after model inference completes.

WEBHOOK CONFIG
Name: ML Inference Callback
Method: POST
URL: https://app.com/ml/callback
Delay: 2000ms (simulate processing)
Auth: Bearer Token
Bound to: POST /api/ml/jobs/:jobId
USE CASES
  • β€’ Image classification results
  • β€’ NLP processing completion
  • β€’ Recommendation engine outputs
  • β€’ Batch prediction results
REQUEST BODY
{
  "jobId": "{{request.path.jobId}}",
  "status": "completed",
  "model": "{{request.body.model}}",
  "result": {
    "prediction": {{request.body.prediction}},
    "confidence": {{request.body.confidence}},
    "processingTime": {{request.body.processingTimeMs}}
  },
  "callbackId": "{{random.uuid}}",
  "completedAt": "{{request.timestamp}}"
}

Payment Processing: Transaction Callbacks

Simulate payment gateway callbacks (like Stripe, PayPal) that notify your app when transactions complete.

WEBHOOK CONFIG
Name: Payment Completed
Method: POST
URL: http://localhost:3000/webhooks/payment
Delay: 1500ms
Retry: 3 attempts, 2000ms delay
Bound to: POST /api/payments/charge
USE CASES
  • β€’ Order fulfillment triggers
  • β€’ Subscription activation
  • β€’ Receipt generation
  • β€’ Inventory updates
REQUEST BODY
{
  "event": "payment.completed",
  "transactionId": "{{random.uuid}}",
  "orderId": "{{request.body.orderId}}",
  "amount": {{request.body.amount}},
  "currency": "{{request.body.currency}}",
  "paymentMethod": "{{request.body.paymentMethod}}",
  "status": "success",
  "processedAt": "{{request.timestamp}}",
  "metadata": {
    "customerId": "{{request.body.customerId}}",
    "description": "{{request.body.description}}"
  }
}

E-commerce: Order Shipping Notifications

Simulate shipping carrier webhooks that notify customers when orders are shipped with tracking info.

WEBHOOK CONFIG
Name: Order Shipped Notification
Method: POST
URL: https://shop.com/webhooks/shipping
Bound to: PUT /api/orders/:orderId/ship
USE CASES
  • β€’ Customer email notifications
  • β€’ SMS tracking updates
  • β€’ Order status page updates
  • β€’ Delivery estimate calculations
REQUEST BODY
{
  "event": "order.shipped",
  "orderId": "{{request.path.orderId}}",
  "trackingNumber": "{{random.uuid}}",
  "carrier": "{{request.body.carrier}}",
  "estimatedDelivery": "{{request.body.estimatedDelivery}}",
  "items": {{request.body.items}},
  "shippingAddress": {{request.body.shippingAddress}},
  "notifiedAt": "{{request.timestamp}}"
}

IoT: Sensor Alerts

Simulate IoT device webhooks that alert when sensor readings exceed thresholds.

WEBHOOK CONFIG
Name: Sensor Alert
Method: POST
URL: https://iot-platform.com/alerts
Auth: API Key
Bound to: POST /api/devices/:deviceId/reading
USE CASES
  • β€’ Temperature threshold alerts
  • β€’ Motion detection notifications
  • β€’ Equipment malfunction warnings
  • β€’ Environmental monitoring
REQUEST BODY
{
  "deviceId": "{{request.path.deviceId}}",
  "sensorType": "{{request.body.sensorType}}",
  "reading": {
    "value": {{request.body.value}},
    "unit": "{{request.body.unit}}",
    "threshold": {{request.body.threshold}}
  },
  "alertLevel": "{{request.body.alertLevel}}",
  "location": {
    "lat": {{request.body.latitude}},
    "lng": {{request.body.longitude}}
  },
  "timestamp": "{{request.timestamp}}"
}

Slack: Team Notifications

Send formatted messages to Slack channels when important events occur.

WEBHOOK CONFIG
Name: Slack Order Alert
Method: POST
URL: https://hooks.slack.com/services/T.../B.../xxx
Headers: Content-Type: application/json
Bound to: POST /api/orders/:orderId
USE CASES
  • β€’ New order notifications
  • β€’ Error alerts to dev channels
  • β€’ Deployment status updates
  • β€’ Customer support tickets
REQUEST BODY (Slack Block Kit)
{
  "text": "New Order Received!",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Order #{{request.path.orderId}}*\nCustomer: {{request.body.customer.name}}\nTotal: ${{request.body.total}}"
      }
    }
  ]
}

Discord: Game/Community Updates

Send rich embed messages to Discord channels for community updates.

WEBHOOK CONFIG
Name: Discord Match Update
Method: POST
URL: https://discord.com/api/webhooks/...
Bound to: PUT /api/matches/:matchId
USE CASES
  • β€’ Game server status updates
  • β€’ Tournament bracket changes
  • β€’ Community event reminders
  • β€’ Leaderboard updates
REQUEST BODY (Discord Embed)
{
  "content": "Match Update!",
  "embeds": [{
    "title": "{{request.body.homeTeam}} vs {{request.body.awayTeam}}",
    "description": "Score: {{request.body.homeScore}} - {{request.body.awayScore}}",
    "color": 5814783,
    "fields": [
      {"name": "Minute", "value": "{{request.body.minute}}'", "inline": true},
      {"name": "Event", "value": "{{request.body.eventType}}", "inline": true}
    ],
    "timestamp": "{{request.timestamp}}"
  }]
}

Microservices: Inter-Service Communication

Simulate event-driven architecture where services communicate via webhooks.

WEBHOOK CONFIG
Name: Inventory Reserve
Method: POST
URL: http://inventory-service:8080/reserve
Delay: 100ms
Bound to: POST /api/orders/:orderId
USE CASES
  • β€’ Order β†’ Inventory sync
  • β€’ User β†’ Notification service
  • β€’ Payment β†’ Fulfillment trigger
  • β€’ Analytics event streaming
REQUEST BODY
{
  "eventType": "inventory.reserve",
  "correlationId": "{{request.header.X-Correlation-Id}}",
  "orderId": "{{request.path.orderId}}",
  "items": {{request.body.items}},
  "warehouseId": "{{request.body.warehouseId}}",
  "priority": "{{request.body.priority}}",
  "requestedAt": "{{request.timestamp}}"
}

Audit Logging: Compliance & Security

Send all API activity to an audit service for compliance and security monitoring.

WEBHOOK CONFIG
Name: Audit Logger
Method: POST
URL: https://audit.internal/logs
Auth: API Key (X-Audit-Key)
Retry: 5 attempts (critical data)
Bound to: Multiple sensitive endpoints
USE CASES
  • β€’ GDPR/HIPAA compliance logging
  • β€’ Security incident tracking
  • β€’ User activity monitoring
  • β€’ Data change history
REQUEST BODY
{
  "eventId": "{{random.uuid}}",
  "timestamp": "{{request.timestamp}}",
  "action": "{{request.body.action}}",
  "resource": "{{request.body.resource}}",
  "resourceId": "{{request.path.id}}",
  "userId": "{{request.header.X-User-Id}}",
  "ipAddress": "{{request.header.X-Forwarded-For}}",
  "userAgent": "{{request.header.User-Agent}}",
  "changes": {{request.body.changes}}
}

Social Media: Activity Notifications

Simulate social platform webhooks for likes, follows, comments, and other interactions.

WEBHOOK CONFIG
Name: New Follower Alert
Method: POST
URL: https://app.com/webhooks/social
Bound to: POST /api/users/:userId/follow
USE CASES
  • β€’ Push notifications for new followers
  • β€’ Like/comment activity feeds
  • β€’ Milestone achievement alerts
  • β€’ Creator dashboard updates
REQUEST BODY
{
  "event": "new_follower",
  "userId": "{{request.path.userId}}",
  "follower": {
    "id": "{{request.body.followerId}}",
    "username": "{{request.body.followerUsername}}",
    "displayName": "{{request.body.followerDisplayName}}"
  },
  "totalFollowers": {{request.body.totalFollowers}},
  "timestamp": "{{request.timestamp}}",
  "notificationId": "{{random.uuid}}"
}

Webhook History & Monitoring

Every webhook execution is logged and visible in the Response tab of your webhook. Monitor, debug, and verify that your webhooks are working correctly in real-time.

What's captured in history:

βœ“ Request details (method, URL, headers, body sent)
βœ“ Response details (status code, headers, body received)
βœ“ Timing (duration in milliseconds)
βœ“ Trigger info (which endpoint triggered it)
βœ“ Retry attempts (attempt number and max attempts)
βœ“ Status (success/failure with error messages)

Real-time updates: Webhook history updates instantly via Server-Sent Events (SSE). Results appear automatically as webhooks execute - no refresh needed!

Testing Webhooks

Mocklantis provides a "Try" button to manually test webhooks without triggering them from an endpoint.

πŸ“ How to Test:

  1. Select your webhook in the sidebar
  2. Click the "Try" button in the header
  3. The webhook executes immediately
  4. Check the Response tab for results

⚠️ Note:

When testing manually, template variables like {{request.body.field}} won't have real values since there's no triggering request. Use {{random.uuid}} or hardcoded test values for manual testing.

Retry Logic

Configure automatic retries for webhooks that fail due to network issues or temporary service outages.

How retries work:

Attempt 1: Failed (timeout)
    ↓ Wait retryDelay (1000ms)
Attempt 2: Failed (500 error)
    ↓ Wait retryDelay (1000ms)
Attempt 3: Success! (200 OK)

βœ… Success Criteria

HTTP status 200-299 is considered successful. Retries stop immediately on success.

❌ Failure Criteria

Non-2xx status codes, timeouts, or connection errors trigger retries until max attempts reached.
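
The retry loop amounts to something like this sketch. The names are hypothetical, and `send()` stands in for the actual HTTP call:

```python
import time

def deliver_with_retry(send, retry_count=3, retry_delay=0.0):
    """Try up to 1 + retry_count times; a 2xx status stops retries."""
    for attempt in range(1, retry_count + 2):
        try:
            status = send()
            if 200 <= status < 300:
                return attempt, status      # success: stop immediately
        except Exception:
            status = None                   # timeout / connection error
        if attempt <= retry_count:
            time.sleep(retry_delay)         # wait before the next try
    return attempt, status                  # all attempts exhausted

# A sender that fails twice, then succeeds:
responses = iter([500, 503, 200])
print(deliver_with_retry(lambda: next(responses)))  # (3, 200)
```

With Retry Count 0 (the default), the webhook is attempted exactly once.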

Troubleshooting

Webhook not firing?

  • Check if the webhook is bound to the endpoint you're calling
  • Verify the mock server is running
  • Check the webhook URL is valid (http:// or https://)
  • Look for any validation errors in the webhook configuration

Getting timeout errors?

  • Increase the timeout value in Settings tab
  • Check if the target URL is reachable from your machine
  • Verify no firewall or VPN is blocking the connection
  • Try the URL in a browser or with curl first

SSL certificate errors?

  • Enable "Skip SSL Verification" in Settings tab for self-signed certs
  • This is common in development/staging environments
  • Don't use this option when connecting to production services

Template variables not working?

  • Check the syntax: {{request.path.id}} not {{path.id}}
  • Include the request. prefix for request data
  • Ensure the triggering request has the expected data
  • For body JSONPath, verify the path exists in the request body
  • Check for typos in variable names (case-sensitive for body paths)

History not showing results?

  • Select the webhook in the sidebar to see its history
  • History is stored per-webhook, not globally
  • Results update in real-time via SSE - check your network connection
  • History persists during the session but clears when you close Mocklantis

Authentication not working?

  • For Basic Auth, ensure username and password are both provided
  • For Bearer Token, check the token doesn't have extra whitespace
  • For API Key, verify the header name matches what the service expects
  • Check the Response tab to see what headers were actually sent

Summary

Webhooks are perfect for:

  • βœ“ Testing notification systems (Slack, Discord, Email)
  • βœ“ Simulating payment gateway callbacks
  • βœ“ Mocking inter-service communication
  • βœ“ Testing async/event-driven workflows
  • βœ“ Audit logging simulation
  • βœ“ IoT device event handling
  • βœ“ AI/ML job completion callbacks
  • βœ“ Real-time score/data updates

Key features:

  • βœ“ Fire-and-forget execution
  • βœ“ Template variable support
  • βœ“ Multiple authentication methods
  • βœ“ Configurable retry logic
  • βœ“ Real-time history monitoring
  • βœ“ Delay simulation for realism
  • βœ“ SSL verification control
  • βœ“ Multi-endpoint binding

Tips & Best Practices

πŸ’‘ Use descriptive names so you know what webhooks do at a glance. Example: "Slack Notification - Order Created" instead of "Webhook 1"

πŸ’‘ Set appropriate timeouts - 30 seconds is usually plenty for most external services

πŸ’‘ Add delays (100-2000ms) to simulate real-world latency from payment processors and external services

πŸ’‘ Configure retries (3-5) for critical webhooks like audit logging that must succeed

⚠️ Double-check webhook URLs before testing - don't accidentally send test data to production systems!

⚠️ Credentials are stored locally - don't commit sensitive data to version control

🎯 Use the "Try" button to test webhooks without triggering from an endpoint

🎯 Check the Response tab to debug template variable issues and see what was actually sent

WebSocket Endpoints

Simulate and test real-time WebSocket connections with Mocklantis.

Quick Start

Creating a WebSocket endpoint takes just 3 steps:

  1. Set the path: /ws/chat
  2. Choose a mode: Conversational, Streaming, or Triggered Streaming
  3. Configure messages: Add patterns and responses

Path Configuration

Define the access path for your WebSocket endpoint. The path must always start with /.

Example Paths:

βœ“ /ws/chat - Simple chat WebSocket
βœ“ /api/v1/notifications - Notification stream
βœ“ /live/stock-prices - Live stock data

πŸ’‘ Tip: You can change the path while the server is running; active connections are preserved!

Mode Selection

Mocklantis offers 3 different WebSocket modes. Each mode is designed for different use cases:

πŸ’¬ Conversational Mode

Responds to each incoming message based on specific patterns.

When to use: Chat applications, command-based systems, Q&A bots

Example: User sends "hello" β†’ Responds with "Hi there!"

πŸ“‘ Streaming Mode

Automatically sends periodic messages when a connection is established.

When to use: Live data feeds, sensor data, real-time analytics

Example: Current stock prices every 1 second

🎯 Triggered Streaming Mode

Starts a stream when a specific message is received, sends an initial response, then sends periodic messages.

When to use: On-demand data streams, progressive loading, long-running operations

Example: "start" message β†’ Initial response β†’ Progress updates every 2s

Connection Example

Once your server is running, you can connect to your WebSocket endpoint like this:

JavaScript:

const ws = new WebSocket('ws://localhost:5678/ws/chat');

ws.onopen = () => {
    console.log('Connected!');
    ws.send('Hello server!');
};

ws.onmessage = (event) => {
    console.log('Received:', event.data);
};

Python:

import websocket

ws = websocket.create_connection('ws://localhost:5678/ws/chat')
ws.send('Hello server!')
result = ws.recv()
print(f'Received: {result}')

Next Steps

Explore Modes in Detail

Learn the detailed features and examples of each mode

Lifecycle Events

Automatic message sending on connection

Advanced Options

Advanced configurations and customizations

Tips & Best Practices

βœ… Edit while server is running: When you change endpoint configuration, there's no need to restart the server; changes apply instantly!

πŸ’‘ Test your patterns: In Conversational and Triggered Streaming modes, test your patterns with simple messages.

⚑ Performance: Don't set the streaming interval too low (minimum 100ms recommended); otherwise clients may be overwhelmed by message load.

πŸ’¬ Conversational Mode

Intelligently respond to every incoming message based on patterns. Perfect for chat bots, command systems, and interactive applications!

How Does It Work?

In Conversational Mode, when a client sends a message, Mocklantis checks the pattern list from top to bottom. It sends the response of the first matching pattern.

Workflow:
  1. Client sends a message
  2. Mocklantis checks patterns in order
  3. Sends the response of the first matching pattern
  4. If no match is found, stays silent
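
This first-match dispatch can be sketched like so (simplified to exact and contains matching; all names are illustrative):

```python
def respond(message, patterns):
    """Return the response of the first matching pattern, or None (stay silent)."""
    for kind, pattern, response in patterns:
        if kind == "exact" and message == pattern:
            return response
        if kind == "contains" and pattern in message:
            return response
    return None  # no pattern matched: the endpoint stays silent

patterns = [
    ("exact", "/balance", '{"balance": 1250.50, "currency": "USD"}'),
    ("contains", "error", "Something went wrong, please retry."),
]
print(respond("/balance", patterns))
print(respond("connection error", patterns))
print(respond("hello", patterns))  # None
```

Because patterns are checked top to bottom, put your most specific patterns first.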

Pattern Types

Mocklantis offers 4 powerful pattern matching types. Each is optimized for different scenarios:

🎯 EXACT Match

The incoming message must be exactly the same as the pattern. Case-sensitive and whitespace matters.

Pattern:

hello

Results:

βœ“ hello β†’ Matches
βœ— Hello β†’ No match (uppercase)
βœ— hello world β†’ No match (extra words)

πŸ’‘ When to use:

  • Command-based systems: /help, /start
  • Fixed API commands: PING, STATUS
  • Specific event names: user:login

Real-Life Example:

Pattern:

/balance

Response:

{
  "balance": 1250.50,
  "currency": "USD"
}

πŸ” CONTAINS Match

Matches if the incoming message contains the pattern. The most flexible matching type.

Pattern:

error

Results:

βœ“ error β†’ Matches
βœ“ connection error β†’ Matches
βœ“ There was an error in processing β†’ Matches
βœ— Error β†’ No match (case-sensitive)

πŸ’‘ When to use:

  • Keyword-based responses: messages containing "help"
  • Natural language processing: messages containing "hello", "hi"
  • Error catching: messages containing "error", "failed"

Real-Life Example - Chat Bot:

Pattern:

price

Response:

Our premium plan is $29/month. Type "subscribe" to get started!

This pattern catches these messages:

  • "What's the price?"
  • "price info please"
  • "Tell me about pricing"

⚑ REGEX Match (Regular Expression)

Use regex for advanced pattern matching. The most powerful and flexible option.

Example 1: Email Validation

Pattern: ^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$
βœ“ user@example.com β†’ Matches
βœ— invalid-email β†’ No match

Example 2: Phone Number

Pattern: ^\d{10}$
βœ“ 5551234567 β†’ Matches
βœ— 555-123-4567 β†’ No match

Example 3: Command Parameters

Pattern: /send (.*) to (.*)
βœ“ /send money to john β†’ Matches
βœ“ /send file to team β†’ Matches

πŸ’‘ When to use:

  • Format validation: email, phone, credit card
  • Parameterized commands: /order [product] qty:[number]
  • Complex patterns: code snippets, URLs
  • Case-insensitive match: (?i)hello

Real-Life Example - Order System:

Pattern:

^order #(\d+)$

Response:

{
  "orderId": "12345",
  "status": "shipped",
  "trackingNumber": "ABC123XYZ"
}
Matches: order #12345, order #999

⚠️ Warning: Regex syntax errors will prevent the pattern from working. Test it!
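
For example, the order-system pattern behaves like this with Python's `re` module (Mocklantis's regex dialect may differ slightly; treat this as a sketch for testing your patterns):

```python
import re

# The order-system pattern from above.
pattern = re.compile(r"^order #(\d+)$")

m = pattern.match("order #12345")
print(bool(m), m.group(1))                 # True 12345  (captures the order id)
print(bool(pattern.match("Order #12345"))) # False (case-sensitive by default)

# The (?i) flag makes matching case-insensitive:
print(bool(re.match(r"(?i)^order #(\d+)$", "ORDER #7")))  # True
```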

πŸ“¦ JSON_PATH Match (JSON Path Queries)

Check specific fields in JSON messages. Ideal for modern APIs!

Example 1: Root Level Field

Pattern: $.type
Incoming message:
{
  "type": "message",
  "content": "Hello"
}
βœ“ Matches! (type field exists)

Example 2: Nested Object

Pattern: $.user.profile.role
Incoming message:
{
  "user": {
    "profile": {
      "role": "admin"
    }
  }
}
βœ“ Matches! ($.user.profile.role exists)

Example 3: Array Element

Pattern: $.users[0].name
Incoming message:
{
  "users": [
    {"name": "Alice", "age": 30},
    {"name": "Bob", "age": 25}
  ]
}
βœ“ Matches! (First user's name exists)

Example 4: Nested in Array

Pattern: $.data.items[1].status
Incoming message:
{
  "data": {
    "items": [
      {"id": 101, "status": "active"},
      {"id": 102, "status": "pending"}
    ]
  }
}
βœ“ Matches! (Second item's status exists)

πŸ’‘ When to use:

  • Event type routing: respond based on event type via $.event
  • Auth checks: check if $.auth.token exists
  • Nested data: $.order.payment.method
  • Array operations: $.cart.items[0].id

Real-Life Example - Event Router:

Pattern 1: User Login Event

$.event

Message: {"event":"login","user":"john"}

Response:

{
  "status": "success",
  "message": "Welcome back!",
  "sessionId": "abc123"
}

πŸ’‘ Pro Tip: JSON Path only checks for field existence, not its value. Use REGEX for value checking.
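The existence-only rule can be approximated in a few lines. This is a sketch of the matching behavior as described above, not Mocklantis internals; jsonPathExists and its simple path parsing are illustrative:

```javascript
// Existence-only JSON Path matching sketch: the pattern matches when the
// field exists, regardless of its value. Supports dot paths and numeric
// array indices like $.users[0].name.
function jsonPathExists(pattern, message) {
  const parts = pattern
    .replace(/^\$\.?/, '')      // drop the leading "$."
    .split(/\.|\[(\d+)\]/)      // split on "." and "[n]"
    .filter((p) => p !== undefined && p !== '');
  let node = JSON.parse(message);
  for (const part of parts) {
    if (node === null || typeof node !== 'object' || !(part in node)) {
      return false; // path broken: no match
    }
    node = node[part];
  }
  return true; // every segment existed
}

const msg = '{"user":{"profile":{"role":"admin"}}}';
console.log(jsonPathExists('$.user.profile.role', msg));  // true
console.log(jsonPathExists('$.user.profile.email', msg)); // false
```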

Response Templating

Use {{request.message}} to reference the incoming message in your response. For JSON messages, you can access specific fields using dot notation.

πŸ”„
Echo & Transform Messages

Example 1: Full Message Echo

Incoming:

{"action":"login","user":"john"}

Response Template:

{
  "type": "echo",
  "received": {{request.message}}
}

Actual Response:

{
  "type": "echo",
  "received": {"action":"login","user":"john"}
}

Example 2: Extract Specific Fields

Incoming:

{"action":"subscribe","channel":"news","userId":"123"}

Response Template:

{
  "status": "subscribed",
  "channel": "{{request.message.channel}}",
  "user": "{{request.message.userId}}"
}

Actual Response:

{
  "status": "subscribed",
  "channel": "news",
  "user": "123"
}

Example 3: Nested JSON Access

Incoming:

{
  "event": "order",
  "data": { "product": "Widget", "quantity": 5 }
}

Response Template:

{
  "confirmed": true,
  "product": "{{request.message.data.product}}",
  "qty": "{{request.message.data.quantity}}"
}

Actual Response:

{
  "confirmed": true,
  "product": "Widget",
  "qty": "5"
}

πŸ’‘ Available Variables:

  • {{request.message}} - Full incoming message
  • {{request.message.field}} - Specific JSON field
  • {{request.message.nested.field}} - Nested field access
  • {{request.timestamp}} - Current ISO timestamp
  • {{random.uuid}} - Random UUID (and other random variables)

Response Configuration

Response Delay

You can add an optional delay (in milliseconds) for each pattern.

Delay: 0ms

β†’ Instant response (default)

Delay: 1000ms

β†’ Response after 1 second

Delay: 3000ms

β†’ Response after 3 seconds (slow network simulation)

πŸ’‘ Use Cases:

  • Network latency simulation
  • Slow server testing (timeout tests)
  • Progressive loading UX testing
  • Realistic delay for "Thinking..." animations
Response Format

Responses can be plain text or JSON; Mocklantis determines the content type automatically.

Plain Text:

Hello! How can I help you?

JSON:

{
  "message": "Hello!",
  "timestamp": 1234567890,
  "userId": "user123"
}

Pattern Priority

Patterns are checked from top to bottom. The first matching pattern wins!

⚠️ Important: Ordering Strategy

  • Put specific patterns at the top
  • Put general patterns at the bottom (as fallback)
  • CONTAINS type is the most general, use carefully

Bad Example ❌:

1. help (CONTAINS)
2. /help balance (EXACT)

β†’ Pattern 1 catches EVERY message containing "help", so Pattern 2 is never reached!

Good Example βœ…:

1. /help balance (EXACT)
2. /help order (EXACT)
3. help (CONTAINS - fallback)

β†’ Specific commands are checked first; if none match, the general help message is sent.
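The first-match-wins rule can be sketched in a few lines, using the patterns from the good example above (matchFirst is an illustrative name, not a Mocklantis API):

```javascript
// First-match-wins: patterns are evaluated top to bottom, so specific
// EXACT patterns must precede the general CONTAINS fallback.
const patterns = [
  { type: 'EXACT',    pattern: '/help balance', response: 'Balance: $1250.50' },
  { type: 'EXACT',    pattern: '/help order',   response: 'Order help...' },
  { type: 'CONTAINS', pattern: 'help',          response: 'General help message' },
];

function matchFirst(message) {
  for (const p of patterns) {
    const hit =
      p.type === 'EXACT' ? message === p.pattern : message.includes(p.pattern);
    if (hit) return p.response; // the first matching pattern wins
  }
  return null; // no pattern matched
}

console.log(matchFirst('/help balance')); // Balance: $1250.50
console.log(matchFirst('need help!'));    // General help message
```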

Complete Example: Support Bot

A real customer support bot scenario. Uses all pattern types:

Pattern #1 - Greeting (CONTAINS)

Pattern: hello

Response: Hi! I'm your support assistant. How can I help you today?

Delay: 0ms

Pattern #2 - Order Status (REGEX)

Pattern: ^order #(\d+)$

Response:

{
  "orderId": "12345",
  "status": "shipped",
  "estimatedDelivery": "2024-01-20"
}

Delay: 500ms

Pattern #3 - Account Balance (EXACT)

Pattern: /balance

Response:

{
  "balance": 1250.50,
  "currency": "USD",
  "lastUpdated": "2024-01-15T10:30:00Z"
}

Delay: 800ms

Pattern #4 - Event Routing (JSON_PATH)

Pattern: $.action

Message: {"action":"subscribe","plan":"premium"}

Response:

{
  "status": "subscribed",
  "plan": "premium",
  "nextBillingDate": "2024-02-15"
}

Delay: 1000ms

Pattern #5 - Fallback (CONTAINS)

Pattern: . (any character)

Response: I didn't understand that. Type "help" for available commands.

Delay: 0ms

Best Practices

βœ…

Put specific patterns at the top

Check special cases first, then general cases.

βœ…

Use descriptive names for each pattern

Give names that describe the pattern's purpose: "Login Handler", "Order Status Query"

βœ…

Use realistic delays

Simulate your production environment's response times.

βœ…

Add a fallback pattern

Add a general CONTAINS pattern at the end to respond to unexpected messages.

❌

Don't use CONTAINS as the first pattern

Very general patterns can prevent other patterns from working.

❌

Don't use regex for simple matches

If EXACT or CONTAINS is sufficient, prefer it over regex; simpler matchers are faster and easier to maintain.

πŸ“‘ Streaming Mode

Automatically send periodic messages when a connection is established. Perfect for real-time data feeds, sensor data, and live updates!

How Does It Work?

In Streaming Mode, as soon as the client connects to the WebSocket, Mocklantis automatically starts sending periodic messages. The stream continues even if the client doesn't send any messages.

Workflow:
1. Client connects to WebSocket
2. Mocklantis automatically starts the stream
3. Messages are sent sequentially at the specified interval
4. When the message list ends, it loops back to the beginning (circular)
5. Continues until the client disconnects

Configuration

⏱️Streaming Interval

Wait time between messages (in milliseconds). This interval remains constant.

100ms β†’ Very fast (10 messages per second)
1000ms β†’ Normal (1 message per second)
5000ms β†’ Slow (1 message per 5 seconds)

⚠️ Performance Note: Don't set the interval too low (minimum 100ms recommended). Very fast streams can overwhelm the client's processing capacity.

πŸ’‘ Tip: You can change the interval while the server is running, and the change takes effect immediately!

πŸ“Stream Messages

List of messages to be sent. Messages are sent sequentially and loop back to the beginning when the list ends.

Message Formats:

Plain Text:

Stock price: $150.25

JSON:

{
  "symbol": "AAPL",
  "price": 150.25,
  "timestamp": 1234567890
}

XML:

<stock>
  <symbol>AAPL</symbol>
  <price>150.25</price>
</stock>

πŸ’‘ Circular Streaming: Messages are sent in sequence and automatically loop back to the beginning when the list ends. This creates an infinite loop!
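Under the hood, circular streaming amounts to indexing with a modulo. A client-agnostic sketch; startStream and its names are illustrative, not Mocklantis APIs:

```javascript
// Circular streaming sketch: the message index is the modulo of a counter,
// so after the last message the stream wraps back to the first.
const streamMessages = ['{"price":150.25}', '{"price":150.40}', '{"price":150.10}'];

const nextMessage = (tick) => streamMessages[tick % streamMessages.length];

// Hypothetical send loop for one connected client (interval in ms).
// Returns a cleanup function to call on disconnect.
function startStream(send, intervalMs) {
  let tick = 0;
  const timer = setInterval(() => send(nextMessage(tick++)), intervalMs);
  return () => clearInterval(timer);
}

console.log(nextMessage(0)); // first message
console.log(nextMessage(3)); // wraps back to the first message
```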

Use Cases

Streaming Mode is perfect for many real-time applications:

πŸ“ˆ
Stock / Crypto Prices

Simulate live price updates.

Configuration:

Interval: 1000ms (1 second)

Messages: 10-15 different price values

Example Message:

{
  "symbol": "BTC",
  "price": 42350.75,
  "change": "+2.5%",
  "volume": 1250000000
}
🌑️
IoT Sensor Data

Simulate sensor data such as temperature, humidity, and pressure.

Configuration:

Interval: 2000ms (2 seconds)

Messages: Different sensor readings

Example Message:

{
  "sensorId": "TEMP-001",
  "temperature": 23.5,
  "humidity": 65,
  "timestamp": "2024-01-15T10:30:00Z"
}
⚽
Live Sports Scores

Match scores, statistics, and updates.

Configuration:

Interval: 3000ms (3 seconds)

Messages: Different match states

Example Messages:

Message 1: {"score": "1-0", "minute": "15"}
Message 2: {"score": "1-0", "minute": "30"}
Message 3: {"score": "2-0", "minute": "42"}
Message 4: {"score": "2-1", "minute": "68"}
πŸ“Š
System Monitoring

System metrics such as CPU, RAM, and disk usage.

Configuration:

Interval: 5000ms (5 seconds)

Messages: Different metric values

Example Message:

{
  "cpu": 45.2,
  "memory": 62.8,
  "disk": 78.5,
  "network": 125.3
}
πŸ’¬
Chat Activity Feed

Live chat messages and user activities.

Configuration:

Interval: 4000ms (4 seconds)

Messages: Different user messages

Example Messages:

Message 1: {"user": "Alice", "text": "Hello!"}
Message 2: {"user": "Bob", "text": "How are you?"}
Message 3: {"user": "Charlie", "text": "Great!"}
⏳
Progress Updates

Progress status of long-running operations.

Configuration:

Interval: 2000ms (2 seconds)

Messages: Progress values from 0% to 100%

Example Messages:

Message 1: {"progress": 10, "status": "Processing..."}
Message 2: {"progress": 25, "status": "Processing..."}
Message 3: {"progress": 50, "status": "Half way..."}
Message 4: {"progress": 75, "status": "Almost done..."}
Message 5: {"progress": 100, "status": "Complete!"}

Complete Example: Crypto Price Tracker

A real cryptocurrency exchange simulation. Continuously changing prices, volume, and percentage changes:

Configuration

Path: /live/crypto

Mode: Streaming

Interval: 1500ms (1.5 seconds)

Stream Messages (10 messages):

Message #1:

{
  "symbol": "BTC/USD",
  "price": 42150.25,
  "change": "+1.2%",
  "volume": 1250000000,
  "timestamp": 1767225600000
}

Message #2:

{
  "symbol": "BTC/USD",
  "price": 42180.5,
  "change": "+1.3%",
  "volume": 1255000000,
  "timestamp": 1767225601500
}

Message #3:

{
  "symbol": "BTC/USD",
  "price": 42165.75,
  "change": "+1.2%",
  "volume": 1260000000,
  "timestamp": 1767225603000
}

Message #4:

{
  "symbol": "BTC/USD",
  "price": 42200,
  "change": "+1.4%",
  "volume": 1270000000,
  "timestamp": 1767225604500
}

Message #5:

{
  "symbol": "BTC/USD",
  "price": 42175.25,
  "change": "+1.3%",
  "volume": 1265000000,
  "timestamp": 1767225606000
}

Message #6:

{
  "symbol": "BTC/USD",
  "price": 42190.5,
  "change": "+1.3%",
  "volume": 1275000000,
  "timestamp": 1767225607500
}

Message #7:

{
  "symbol": "BTC/USD",
  "price": 42210.75,
  "change": "+1.4%",
  "volume": 1280000000,
  "timestamp": 1767225609000
}

Message #8:

{
  "symbol": "BTC/USD",
  "price": 42195,
  "change": "+1.4%",
  "volume": 1278000000,
  "timestamp": 1767225610500
}

Message #9:

{
  "symbol": "BTC/USD",
  "price": 42220.25,
  "change": "+1.5%",
  "volume": 1285000000,
  "timestamp": 1767225612000
}

Message #10:

{
  "symbol": "BTC/USD",
  "price": 42205.5,
  "change": "+1.4%",
  "volume": 1282000000,
  "timestamp": 1767225613500
}

πŸ’‘ How It Works:

  1. Client connects
  2. A message is sent every 1.5 seconds
  3. After 10 messages, it loops back to the beginning (Message #1)
  4. Continues as an infinite loop
  5. Runs until the client disconnects
Example Client Code:
const ws = new WebSocket('ws://localhost:5678/live/crypto');

ws.onmessage = (event) => {
    const data = JSON.parse(event.data);
    console.log(`BTC Price: $${data.price} (${data.change})`);

    // Update UI
    updatePriceChart(data);
};

// Output:
// BTC Price: $42150.25 (+1.2%)
// BTC Price: $42180.50 (+1.3%)
// BTC Price: $42165.75 (+1.2%)
// ...

Best Practices

βœ… Use realistic intervals

Simulate the update frequency of your production environment.

βœ… Add variation

Add 10-20 different messages to provide realistic data diversity.

βœ… Include timestamps

Add timestamps to messages so clients can check data freshness.

βœ… Use linear values for progress simulation

Use incrementing values like 0-100 to test loading/progress UIs.

❌ Don't use very low intervals

Intervals below 100ms can overload the client and cause performance issues.

❌ Don't add too few messages

A loop with 1-2 messages repeats too quickly and isn't realistic. Add at least 5-10 messages.

Dynamic Updates

⚑Live Configuration Changes

One of Mocklantis's most powerful features: You can change everything while the server is running!

βœ“ Change the interval β†’ Takes effect immediately
βœ“ Add/delete messages β†’ Active in the next cycle
βœ“ Change message content β†’ Updates instantly
βœ“ Active connections are preserved β†’ Clients don't disconnect

Example Scenario:

  1. Server started, 3 clients connected, interval: 2000ms
  2. You changed the interval to 1000ms
  3. Clients stayed connected, new interval applied immediately
  4. You added 2 new messages
  5. New messages were included in the stream in the next cycle
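A plausible model of a live interval update is tearing down the old timer and starting a new one, leaving the connection itself untouched. A hypothetical sketch, not Mocklantis internals:

```javascript
// Sketch of a live-updatable stream timer: changing the interval replaces
// the timer but never touches the underlying connection, so clients stay
// connected (matching the behavior described above).
function createStream(send, initialIntervalMs) {
  let timer = null;
  let intervalMs = initialIntervalMs;
  const start = () => {
    timer = setInterval(() => send('tick'), intervalMs);
  };
  start();
  return {
    updateInterval(ms) {   // takes effect immediately
      intervalMs = ms;
      clearInterval(timer);
      start();
    },
    stop() { clearInterval(timer); }, // called on disconnect
  };
}

// Example: speed up a running stream without dropping the client
// const stream = createStream(msg => ws.send(msg), 2000);
// stream.updateInterval(1000); // clients stay connected
```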

Tips

πŸ’‘ Testing Strategy: Start with a large interval (5000ms) and verify the messages. Then gradually reduce the interval to achieve a realistic speed.

πŸ’‘ Data Variety: Instead of sending the same message repeatedly, create different versions with small variations (price, timestamp, etc.).

πŸ’‘ Client Disconnection Test: Test your reconnection logic by disconnecting and reconnecting the client while the stream is active.

πŸ’‘ Multiple Clients: Connect multiple clients to verify that each receives an independent stream with their own timer.

🎯 Triggered Streaming Mode

Start a stream based on client messages, send an initial response, then send periodic messages and stop automatically. Ideal for long-running operations, progressive loading, and on-demand data streams!

How Does It Work?

Triggered Streaming Mode is a combination of Conversational and Streaming Modes. The client sends a "trigger" message, Mocklantis matches it against patterns, immediately sends the initial response, then begins sending stream messages at the specified interval and stops after the specified duration/count.

Workflow:
1. Client sends trigger message (e.g., "start download")
2. Mocklantis checks patterns (EXACT, CONTAINS, REGEX, JSON_PATH)
3. If a matching pattern is found, the initial response is sent immediately
4. The stream starts; stream messages are sent at the specified interval
5. The stream stops when the stopAfter duration expires OR stopCount is reached
6. If the client sends a new trigger message, the cycle restarts
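The workflow above can be sketched as follows. This approximates the described behavior; shouldStop captures the "whichever comes first" stop rule, and all names are illustrative rather than Mocklantis internals:

```javascript
// Pure stop check: stopAfter OR stopCount, whichever occurs first.
function shouldStop(elapsedMs, sentCount, cfg) {
  return (
    elapsedMs >= cfg.stopAfter ||
    (cfg.stopCount != null && sentCount >= cfg.stopCount)
  );
}

// Triggered-streaming flow sketch: initial response immediately,
// then periodic circular stream messages until a stop condition fires.
function startTriggeredStream(send, cfg) {
  send(cfg.initialResponse); // sent immediately on trigger match
  let sent = 0;
  const startedAt = Date.now();
  const timer = setInterval(() => {
    send(cfg.streamMessages[sent % cfg.streamMessages.length]); // circular
    sent += 1;
    if (shouldStop(Date.now() - startedAt, sent, cfg)) clearInterval(timer);
  }, cfg.interval);
}

console.log(shouldStop(5000, 5, { stopAfter: 10000, stopCount: 5 })); // true
```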

Configuration

In Triggered Streaming, you configure each trigger pattern separately:

🎯Incoming Pattern (Optional)

The message pattern that triggers the stream. It supports the same four types as Conversational Mode: EXACT, CONTAINS, REGEX, and JSON_PATH. If left empty, EVERY message triggers.

Examples:

start (EXACT) - Only "start" triggers
download (CONTAINS) - Messages containing "download"
^process (\d+)$ (REGEX) - Like "process 123"
$.action (JSON_PATH) - JSON with an action field
(empty) - EVERY message triggers

πŸ’‘ Pro Tip: With an empty pattern, you get "any message triggers stream" behavior. Perfect for using the first message as a trigger to start streaming!

⚑Initial Response

The first response sent immediately when a trigger message is received. This message comes before the stream and typically means "processing started."

Example 1: Plain Text

Processing started...

Example 2: JSON

{
  "status": "started",
  "jobId": "job-123",
  "message": "Download initiated"
}

⚠️ Important: The initial response is separate from the stream messages. It is sent first; the first stream message arrives one interval later.

πŸ“¨Stream Messages

Messages sent periodically after the initial response. The list is sent in order and loops back to the beginning (circular).

Example: Progress Updates

Message 1: {"progress": 10}
Message 2: {"progress": 25}
Message 3: {"progress": 50}
Message 4: {"progress": 75}
Message 5: {"progress": 100, "status": "complete"}

πŸ’‘ Tip: For progress simulation, use linear values (0, 20, 40, 60, 80, 100). For log streaming, add different log lines!

⏱️Interval

The wait time between stream messages (in milliseconds).

500ms β†’ Fast updates (2 messages per second)
1000ms β†’ Normal speed (1 message per second)
2000ms β†’ Slow updates (1 message per 2 seconds)

πŸ›‘Stop Conditions

When will the stream stop? There are two conditions - whichever occurs first stops the stream:

stopAfter (Required) - Time-based

Maximum duration in milliseconds after stream starts (Max: 600000ms = 10 minutes)

10000ms β†’ Stop after 10 seconds
30000ms β†’ Stop after 30 seconds
60000ms β†’ Stop after 1 minute

stopCount (Optional) - Count-based

The number of stream messages after which the stream stops (Max: 50; can be left empty)

5 β†’ Stop after 5 messages
10 β†’ Stop after 10 messages
(empty) β†’ Only use stopAfter

⚠️ Critical: The stream stops at stopAfter OR stopCount, whichever comes first. If both are set, the first condition to occur wins.

Example Scenario:

  • β€’ Interval: 1000ms (1 message per second)
  • β€’ stopAfter: 10000ms (10 seconds)
  • β€’ stopCount: 5
  • β€’ Result: Stops after 5th message (5 seconds), because stopCount was reached first

Response Templating

Use {{request.message}} to reference the trigger message in your responses. Works in both Initial Response and Stream Messages!

πŸ”„
Echo Trigger Data in Responses

Example: File Processing with Trigger Data

Trigger Message:

{"action":"process","fileId":"abc123","format":"mp4"}

Initial Response Template:

{
  "status": "started",
  "fileId": "{{request.message.fileId}}",
  "format": "{{request.message.format}}",
  "jobId": "{{random.uuid}}"
}

Actual Initial Response:

{
  "status": "started",
  "fileId": "abc123",
  "format": "mp4",
  "jobId": "f7e8d9c0-1234-5678-abcd-ef0123456789"
}

Stream Messages with Trigger Data:

Stream Message Template:

{
  "fileId": "{{request.message.fileId}}",
  "progress": {{random.number(1,100)}},
  "timestamp": "{{request.timestamp}}"
}

Actual Stream Messages:

{"fileId": "abc123", "progress": 23, "timestamp": "2024-..."}
{"fileId": "abc123", "progress": 67, "timestamp": "2024-..."}
{"fileId": "abc123", "progress": 91, "timestamp": "2024-..."}

πŸ’‘ Available Variables:

  • {{request.message}} - Full trigger message
  • {{request.message.field}} - Specific JSON field
  • {{request.message.nested.field}} - Nested field access
  • {{request.timestamp}} - Current ISO timestamp
  • {{random.uuid}} - Random UUID (and other random variables)

⚠️ Note: The trigger message is captured when the stream starts. All stream messages reference the same trigger message throughout the stream.

Use Cases

Triggered Streaming Mode is ideal for the most complex and realistic scenarios:

πŸ“€
File Upload Progress

Send progress updates while a file is being uploaded.

Configuration:

Pattern: upload (CONTAINS)

Initial Response: {"status":"uploading","fileId":"f123"}

Interval: 500ms

stopAfter: 10000ms (10 seconds)

stopCount: 20

Stream Messages:

{"progress": 5, "bytesUploaded": 512000}
{"progress": 10, "bytesUploaded": 1024000}
{"progress": 20, "bytesUploaded": 2048000}
...
{"progress": 100, "status": "complete"}
🎬
Video Processing

Progress and log messages during video encoding/decoding.

Configuration:

Pattern: $.action (JSON_PATH)

Trigger: {"action":"encode","video":"vid.mp4"}

Initial Response: {"status":"encoding","jobId":"enc-456"}

Interval: 2000ms

stopAfter: 60000ms (1 minute)

Stream Messages:

{"stage": "analyzing", "progress": 10}
{"stage": "encoding", "progress": 30}
{"stage": "encoding", "progress": 60}
{"stage": "finalizing", "progress": 90}
{"stage": "complete", "progress": 100, "url": "..."}
πŸ€–
AI Text Generation

ChatGPT-style streaming response - delivered word by word.

Configuration:

Pattern: generate (CONTAINS)

Initial Response: {"thinking": true}

Interval: 300ms

stopAfter: 15000ms

stopCount: 30

Stream Messages (word by word):

{"token": "The"}
{"token": "quick"}
{"token": "brown"}
{"token": "fox"}
{"token": "jumps"}
...
{"done": true}
πŸ”¨
Build / Deploy Process

CI/CD pipeline logs, build progress, deployment stages.

Configuration:

Pattern: ^deploy (.*)$ (REGEX)

Trigger: deploy production

Initial Response: Deployment started...

Interval: 1500ms

stopAfter: 45000ms

Stream Messages:

[LOG] Building Docker image...
[LOG] Running tests...
[LOG] Tests passed βœ“
[LOG] Pushing to registry...
[LOG] Deploying to k8s...
[LOG] Deployment complete! βœ“
πŸ’Ύ
Large Data Export

Large data export process, chunk-by-chunk progress.

Configuration:

Pattern: (empty) - EVERY message triggers

Initial Response: {"exportId":"exp-789","status":"started"}

Interval: 1000ms

stopAfter: 30000ms

stopCount: 15

Stream Messages:

{"chunk": 1, "records": 1000, "progress": 10}
{"chunk": 2, "records": 2000, "progress": 20}
...
{"chunk": 10, "records": 10000, "progress": 100, "downloadUrl": "..."}
πŸ”
Live Search Results

When the user searches, results arrive progressively.

Configuration:

Pattern: search (CONTAINS)

Initial Response: {"searching": true}

Interval: 800ms

stopAfter: 10000ms

stopCount: 10

Stream Messages:

{"results": [{"id": 1, "title": "Result 1"}]}
{"results": [{"id": 2, "title": "Result 2"}]}
{"results": [{"id": 3, "title": "Result 3"}]}
...
{"done": true, "totalResults": 10}

Complete Example: Video Upload & Processing

A real video upload scenario. The user types "process video.mp4"; the upload starts, progress updates arrive, the video is encoded, and the job completes:

Configuration

Path: /api/video-process

Mode: Triggered Streaming

Trigger Pattern:

Incoming Pattern: ^process (.*)$

Match Type: REGEX

Initial Response:

{
  "status": "processing_started",
  "jobId": "job-abc123",
  "filename": "video.mp4",
  "timestamp": 1234567890
}

Stream Configuration:

Interval: 1000ms

stopAfter: 30000ms (30 seconds)

stopCount: 15 messages

Stream Messages (15 messages):

Message #1 (T+1s):

{
  "stage": "uploading",
  "progress": 10,
  "message": "Uploading file..."
}

Message #2 (T+2s):

{
  "stage": "uploading",
  "progress": 25,
  "message": "Uploading file..."
}

Message #3 (T+3s):

{
  "stage": "uploading",
  "progress": 50,
  "message": "Upload in progress..."
}

Message #4 (T+4s):

{
  "stage": "uploading",
  "progress": 75,
  "message": "Almost uploaded..."
}

Message #5 (T+5s):

{
  "stage": "uploaded",
  "progress": 100,
  "message": "Upload complete!"
}

Message #6 (T+6s):

{
  "stage": "analyzing",
  "progress": 100,
  "message": "Analyzing video..."
}

Message #7 (T+7s):

{
  "stage": "encoding",
  "progress": 15,
  "message": "Encoding started..."
}

Message #8 (T+8s):

{
  "stage": "encoding",
  "progress": 30,
  "message": "Encoding 30%..."
}

Message #9 (T+9s):

{
  "stage": "encoding",
  "progress": 50,
  "message": "Encoding 50%..."
}

Message #10 (T+10s):

{
  "stage": "encoding",
  "progress": 70,
  "message": "Encoding 70%..."
}

Message #11 (T+11s):

{
  "stage": "encoding",
  "progress": 90,
  "message": "Almost done..."
}

Message #12 (T+12s):

{
  "stage": "encoding",
  "progress": 100,
  "message": "Encoding complete!"
}

Message #13 (T+13s):

{
  "stage": "finalizing",
  "progress": 100,
  "message": "Finalizing..."
}

Message #14 (T+14s):

{
  "stage": "thumbnail",
  "progress": 100,
  "message": "Generating thumbnail..."
}

Message #15 (T+15s):

{
  "stage": "complete",
  "progress": 100,
  "message": "All done!",
  "videoUrl": "https://cdn.example.com/video-abc123.mp4",
  "thumbnailUrl": "https://cdn.example.com/thumb-abc123.jpg"
}

🎬 Timeline:

  1. T=0s: User sends "process video.mp4"
  2. T=0s: Initial response sent immediately
  3. T=1s: Message #1 (Upload 10%)
  4. T=2s: Message #2 (Upload 25%)
  5. ...
  6. T=15s: Message #15 (Complete!) β†’ Stream stops (stopCount reached)
  7. Total duration: 15 seconds (stopCount came first, didn't reach stopAfter 30s)
Client Code Example:
const ws = new WebSocket('ws://localhost:5678/api/video-process');

ws.onopen = () => {
    // Trigger stream
    ws.send('process video.mp4');
};

ws.onmessage = (event) => {
    const data = JSON.parse(event.data);

    if (data.status === 'processing_started') {
        showProgressBar();
        console.log('Job ID:', data.jobId);
    } else if (data.stage) {
        updateProgress(data.progress, data.message);

        if (data.stage === 'complete') {
            hideProgressBar();
            showSuccessMessage(data.videoUrl);
            ws.close();
        }
    }
};

// Output:
// Job ID: job-abc123
// [Progress: 10%] Uploading file...
// [Progress: 25%] Uploading file...
// ...
// [Progress: 100%] All done!
// Video URL: https://cdn.example.com/video-abc123.mp4

Stream Locking (Important!)

A critical feature in Triggered Streaming: New trigger messages are ignored while a stream is active!

⚠️ Why?

To prevent multiple streams from starting simultaneously. Otherwise, messages would get mixed up and it would be unclear which stream they belong to.

Scenario:

  1. T=0s: User sends "start" β†’ Stream 1 starts
  2. T=2s: User sends "start" again β†’ IGNORED! (Stream 1 still active)
  3. T=15s: Stream 1 stops (stopAfter or stopCount)
  4. T=17s: User sends "start" β†’ Stream 2 starts (Stream 1 finished, now accepted)

πŸ’‘ Testing Tip: Send multiple trigger messages while stream is active to test the lock mechanism. You should see "Ignoring message (stream active)" in backend logs!
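The lock behaves like a simple boolean gate. A minimal sketch of the described behavior; createTriggerGate is an illustrative name, not a Mocklantis API:

```javascript
// Stream-lock sketch: while a stream is active, new trigger messages are
// ignored; the lock clears when the stream stops.
function createTriggerGate() {
  let active = false;
  return {
    tryStart() {
      if (active) return false; // ignored: a stream is already running
      active = true;
      return true;
    },
    release() { active = false; }, // called when stopAfter/stopCount fires
  };
}

const gate = createTriggerGate();
console.log(gate.tryStart()); // true  - stream 1 starts
console.log(gate.tryStart()); // false - ignored, stream 1 still active
gate.release();               // stream 1 stopped
console.log(gate.tryStart()); // true  - stream 2 starts
```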

Dynamic Configuration

⚑Live Updates (No Restart!)

You can change trigger patterns, stream messages, intervals, and stop conditions while the server is running!

βœ“ Change interval β†’ Applied immediately in active stream
βœ“ Add stream message β†’ Used in next cycle
βœ“ Change stopAfter/stopCount β†’ Applied to active stream
βœ“ Change pattern β†’ Takes effect for new triggers

Example Scenario:

  1. Stream started, interval: 2000ms, stopAfter: 30000ms
  2. After 5 seconds you changed interval to 1000ms
  3. Active stream immediately switched to 1s interval
  4. You reduced stopAfter to 15000ms
  5. Stream stopped at 15 seconds (new stopAfter value applied)

Best Practices

βœ… Always send initial response

Immediately notify the client that processing has started - critical for UX!

βœ… Make progress linear

Use predictable increments like 0, 20, 40, 60, 80, 100.

βœ… Add "done" flag in final message

Let client know stream is finished: {"done": true}

βœ… Always set stopAfter

stopAfter is a required field; use it to cap the stream duration (max 10 minutes).

βœ… Control message count with stopCount

For progress scenarios where message count is known (0-100 = 10 messages), use stopCount.

❌ Don't add too many stream messages

5-20 messages are sufficient; the list repeats because streaming is circular.

❌ Don't set stopAfter too large

10-60 seconds is sufficient for testing; the maximum is 10 minutes.

Advanced Tips

πŸ’‘ Multiple Trigger Patterns: Define different streams for different triggers. Example: "upload" β†’ upload stream, "process" β†’ processing stream.

πŸ’‘ Empty Pattern Trick: If you leave pattern empty, EVERY message starts a stream. Perfect for "first message triggers" behavior.

πŸ’‘ Stream Lock Testing: Send 5-10 messages while stream is active to test the lock mechanism. Watch the backend logs!

πŸ’‘ Progressive Content: To simulate AI generation, add one word in each message, combine on the client to show full text.

πŸ’‘ Error Simulation: Test the client's error handling by sending error messages mid-stream: {"error": "Connection lost", "retry": true}

πŸ”„ Lifecycle Events

Send automatic messages throughout the WebSocket connection lifecycle. Welcome messages when clients connect, goodbye messages when disconnecting, and more!

What are Lifecycle Events?

Lifecycle Events fire automatically at specific moments during a WebSocket connection. Mocklantis currently supports the onConnect event.

Supported Events:
ACTIVE

onConnect

Triggered when client connects to the WebSocket

FUTURE

onDisconnect

When client disconnects (may be added in the future)

FUTURE

onError

When an error occurs in the connection (may be added in the future)

onConnect Event

Automatically sends a message when the client connects to the WebSocket. Ideal for welcome messages, session information, initial state, and similar data.

πŸ’¬Message

The message to send when the client connects. Can be plain text or JSON.

Plain Text Example:

Welcome to the chat! You are now connected.

JSON Example:

{
  "event": "connected",
  "sessionId": "sess-abc123",
  "timestamp": 1234567890,
  "message": "Welcome! You are now connected."
}
⏱️Delay (Optional)

The number of milliseconds to wait after the connection before sending the message. Default: 0ms (instant)

0ms β†’ Send instantly (default)
500ms β†’ After 0.5 seconds
1000ms β†’ After 1 second
3000ms β†’ After 3 seconds

πŸ’‘ When to use delay?

  • To simulate server "preparation" time
  • To test connection handshake delay
  • UX: To show "Connecting..." animation

Use Cases

The onConnect event can be used for many scenarios:

πŸ‘‹
Welcome Message

Notify the user that the connection was successful.

Configuration:

Message: Welcome to our chat! You are now online.

Delay: 0ms

🎫
Session Information

Send session ID, user info, and connection details to the client.

Configuration:

Message:

{
  "event": "session_created",
  "sessionId": "sess-abc123",
  "userId": "user-456",
  "connectedAt": 1234567890,
  "expiresIn": 3600
}

Delay: 0ms

πŸ“Š
Initial State / Data

Send initial data to the client when connection is established.

Configuration:

Message:

{
  "type": "initial_state",
  "data": {
    "unreadMessages": 5,
    "onlineUsers": 42,
    "serverVersion": "1.2.3"
  }
}

Delay: 0ms

βœ…
Server Status

Inform about server health, capabilities, and feature flags.

Configuration:

Message:

{
  "server": {
    "status": "healthy",
    "version": "2.1.0",
    "features": ["chat", "files", "video"],
    "maintenance": false
  }
}

Delay: 0ms

πŸ”
Authentication Confirmation

Confirm that authentication was successful and the token is valid.

Configuration:

Message:

{
  "auth": "success",
  "user": {
    "id": "user-123",
    "username": "john_doe",
    "role": "premium"
  },
  "permissions": ["read", "write", "admin"]
}

Delay: 500ms (auth check simulation)

πŸ“–
Instructions / Help

Tell the user how to use the system and what commands are available.

Configuration:

Message:

Welcome! Available commands:
/help - Show this message
/status - Check server status
/users - List online users
/quit - Disconnect

Delay: 1000ms (for connection animation)

Timeline Example

See how the onConnect event works with different modes:

Scenario: Conversational Mode + onConnect

Configuration:

  • β€’ Mode: Conversational
  • β€’ onConnect Message: "Welcome! Send 'hello' to start."
  • β€’ onConnect Delay: 500ms
  • β€’ Pattern 1: "hello" β†’ "Hi there!"

Timeline:

  1. T=0ms: Client connects to WebSocket
  2. T=500ms: onConnect message arrives: "Welcome! Send 'hello' to start."
  3. T=2000ms: User sends "hello"
  4. T=2000ms: Pattern matches, response sent: "Hi there!"
Scenario: Streaming Mode + onConnect

Configuration:

  • β€’ Mode: Streaming
  • β€’ onConnect Message: "Stream starting..."
  • β€’ onConnect Delay: 0ms
  • β€’ Streaming Interval: 2000ms
  • β€’ Stream Messages: ["Message 1", "Message 2", "Message 3"]

Timeline:

  1. T=0ms: Client connects to WebSocket
  2. T=0ms: onConnect message arrives immediately: "Stream starting..."
  3. T=0ms: Streaming timer starts
  4. T=2000ms: Stream message #1: "Message 1"
  5. T=4000ms: Stream message #2: "Message 2"
  6. T=6000ms: Stream message #3: "Message 3"
  7. T=8000ms: Back to Message 1 (circular)
Scenario: Triggered Streaming + onConnect

Configuration:

  • β€’ Mode: Triggered Streaming
  • β€’ onConnect Message: {"ready": true, "instruction": "Send 'start' to begin"}
  • β€’ onConnect Delay: 1000ms
  • β€’ Trigger Pattern: "start"
  • β€’ Initial Response: "Processing..."
  • β€’ Stream Interval: 1000ms

Timeline:

  1. T=0ms: Client connects to WebSocket
  2. T=1000ms: onConnect message arrives: {"ready": true, ...}
  3. T=3000ms: User sends "start"
  4. T=3000ms: Initial response sent immediately: "Processing..."
  5. T=4000ms: Stream message #1
  6. T=5000ms: Stream message #2
  7. ...

Best Practices

βœ… Always add an onConnect message

Inform the client that the connection was successful - important for UX!

βœ… Use JSON format

Sending structured data makes client parsing easier.

βœ… Add an event type field

{"event": "connected", ...} helps the client easily distinguish message types.

βœ… Simulate realistic latency with delay

Add 500-1000ms delay to simulate authentication checks.

βœ… Include useful info

Add information like session ID, timestamp, server version.

❌ Don't send overly long messages

onConnect messages should be short and concise; use a separate endpoint for large data.

❌ Don't set the delay too high

More than 3 seconds keeps the user waiting; 1-2 seconds is ideal.

Comprehensive Example: Chat Application

onConnect configuration for a production-ready chat application:

Configuration

Path: /chat/room-123

Mode: Conversational

onConnect Event:

Delay: 800ms

Message:

{
  "event": "room_joined",
  "roomId": "room-123",
  "roomName": "General Chat",
  "sessionId": "sess-abc456",
  "user": {
    "id": "user-789",
    "username": "john_doe",
    "avatar": "https://example.com/avatar.jpg"
  },
  "stats": {
    "onlineUsers": 42,
    "totalMessages": 1250
  },
  "capabilities": {
    "canSendImages": true,
    "canSendFiles": true,
    "maxFileSize": 10485760
  },
  "commands": [
    "/help - Show available commands",
    "/users - List online users",
    "/history - Show message history"
  ],
  "timestamp": 1234567890
}
Client Implementation
const ws = new WebSocket('ws://localhost:5678/chat/room-123');

ws.onopen = () => {
    console.log('Connecting to chat room...');
    showLoadingSpinner();
};

ws.onmessage = (event) => {
    const data = JSON.parse(event.data);

    if (data.event === 'room_joined') {
        // onConnect message received!
        hideLoadingSpinner();

        // Display welcome info
        showWelcomeMessage(`Welcome to ${data.roomName}!`);

        // Update UI with room stats
        updateOnlineCount(data.stats.onlineUsers);

        // Store session info
        localStorage.setItem('sessionId', data.sessionId);

        // Display available commands
        showCommandsPanel(data.commands);

        console.log('Connected to room:', data.roomId);
        console.log('Online users:', data.stats.onlineUsers);
    } else {
        // Handle other message types (chat messages, etc.)
        handleChatMessage(data);
    }
};

// Expected Console Output:
// > Connecting to chat room...
// (800ms delay...)
// > Connected to room: room-123
// > Online users: 42

Tips

πŸ’‘ Loading States: Use delay to test your "Connecting..." or "Authenticating..." loading states.

πŸ’‘ Error Testing: Test your connection failure handling by sending an error in the onConnect message:{"error": "Room full"}

πŸ’‘ Version Checking: Send the server version in onConnect to test client version compatibility checks.

πŸ’‘ Initial State Sync: Send initial data with onConnect to test the client's first-render optimization.

πŸ”₯ Advanced Options & Tips

Advanced features, debugging techniques, troubleshooting, and production-level test strategies for mastering Mocklantis!

πŸ‘₯ Multi-Client Testing

Simulate real-world scenarios by testing with multiple clients:

🌐Multiple Browser Tabs

The easiest method: Open clients in different tabs of the same browser.

Test Steps:

  1. Connect WebSocket client in the first tab
  2. Open a second tab, connect to the same endpoint
  3. Open a third tab, connect to the same endpoint again
  4. Monitor messages with console.log in each tab
  5. Streaming: Does each tab receive an independent stream? βœ“
πŸ”€Different Browsers

Perform cross-browser testing with Chrome, Firefox, and Safari.

Why is it important?

  • WebSocket implementations may differ
  • Reconnection logic varies by browser
  • Message handling timings are different
πŸ“±Mobile + Desktop

Realistic multi-platform testing with phone + computer.

Setup:

  1. Start Mocklantis server on your computer
  2. Find your IP address (e.g., 192.168.1.5)
  3. Open browser on your phone
  4. Connect to ws://192.168.1.5:5678/your-path
  5. Test desktop + mobile simultaneously!

🎯 Advanced Test Scenarios

Advanced strategies for production-level test scenarios:

πŸ”„
Reconnection Logic Testing

Simulate network loss and verify the client reconnects.

Test Steps:

  1. Connect client, start stream
  2. Stop the Mocklantis server
  3. Reconnection logic is triggered in the client
  4. Restart the server
  5. Does the client reconnect automatically? βœ“
  6. Does the stream continue? βœ“
⚑
Race Condition Testing

Catch race conditions with rapid sequential messages.

Test Code:

// Rapid fire messages
for (let i = 0; i < 10; i++) {
    ws.send(`message ${i}`);
}

// Does the client handle it properly?
// Is message order preserved?
// Does the UI freeze?
πŸ’Ύ
Memory Leak Testing

Run for a long time and check for memory leaks.

Test Strategy:

  1. Start streaming mode (interval: 1000ms)
  2. Open Browser DevTools β†’ Performance β†’ Memory
  3. Run for 10-15 minutes
  4. Does memory usage increase or stay constant?
  5. Disconnect β†’ Reconnect
  6. Is memory cleaned up? βœ“
🚨
Error Recovery Testing

Test error handling by sending invalid JSON and malformed data.

Test Messages:

// Invalid JSON
ws.send('{invalid json}');

// Huge message
ws.send('x'.repeat(1000000));

// Special characters
ws.send('\x00\x01\x02');

// Does the client crash?
// Does it show an error message?
// Can it recover?
πŸ”€
Concurrent Operations

Send messages and receive streams simultaneously.

Scenario:

  • Conversational Mode + onConnect active
  • onConnect message arrives
  • User sends a message simultaneously
  • Receives response
  • Sends another message simultaneously
  • Are all messages handled correctly? βœ“
πŸ“Š
Load Testing

Perform stress tests with multiple clients.

Test Script:

// Node.js script
const WebSocket = require('ws');

// Create 50 clients
for (let i = 0; i < 50; i++) {
    const ws = new WebSocket('ws://localhost:5678/chat');
    ws.on('open', () => {
        console.log(`Client ${i} connected`);
    });
}

// Does the UI stay responsive?
// Is the server stable?
// Are messages distributed correctly?

πŸ”§ Troubleshooting

Common problems and solutions:

❌ Can't connect

  • Is the server running? Are there logs in the terminal?
  • Is the port correct? (ws://localhost:5678/path)
  • Is the path correct? (does it start with /)?
  • Is the firewall blocking it?

⚠️ Pattern not matching

  • Is the match type correct? (EXACT is very strict)
  • Is there whitespace? ("hello " β‰  "hello")
  • Case-sensitive! ("Hello" β‰  "hello")
  • JSON_PATH: Does the field actually exist?
  • REGEX: Is the syntax correct? Test it

πŸ’¬ Not receiving messages

  • Is the ws.onmessage handler defined?
  • Are there any errors in the console?
  • There might be a JSON.parse error, add try-catch
  • Is there "Sent response" in the backend log?

πŸ”„ Stream not starting

  • Streaming: Are messages empty?
  • Triggered: Make sure you sent the trigger message
  • Triggered: Is the stream already active? (check logs)
  • Are interval and stopAfter set correctly?

⚑ Slow performance

  • Is the interval too low? (min 100ms)
  • Are there too many patterns? (10-20 ideal)
  • Is the regex too complex?
  • Is the message size too large?
  • Are too many clients connected?

πŸ’Ž Pro Tips & Tricks

πŸ’‘ Tip #1: Pattern Priority Testing
Add 2 patterns that match the same message. Does the first one work? Test if priority is correct.

πŸ’‘ Tip #2: Timestamp in Every Message
Add timestamps to responses to measure client-side latency.

πŸ’‘ Tip #3: Message ID
Give each message a unique ID to test duplicate detection.

πŸ’‘ Tip #4: Error Messages
Test error handling by sending {"error": "...", "code": 500} in the middle of a stream.

πŸ’‘ Tip #5: Progressive Enhancement
Start with simple patterns, increase complexity as you verify they work.

πŸ’‘ Tip #6: Multiple Endpoints
Use different paths for different features: /chat, /notifications, /data

πŸ’‘ Tip #7: Browser DevTools Network Tab
You can see WebSocket frames in the Network tab, check the raw data.

πŸ’‘ Tip #8: Save Configurations
Export your Mocklantis configs as JSON (future feature), share with your team.

SSE Endpoints

Simulate real-time server-to-client streaming with Server-Sent Events. Perfect for live notifications, AI streaming responses, real-time feeds, and more!

What is SSE?

Server-Sent Events (SSE) is a standard that enables servers to push real-time updates to clients over HTTP. Unlike WebSockets, SSE is unidirectional (server-to-client only), making it simpler and perfect for scenarios where the client only needs to receive data.

SSE vs WebSocket
FeatureSSEWebSocket
DirectionServer β†’ Client (unidirectional)Bidirectional
ProtocolHTTP/HTTPSWS/WSS
Auto ReconnectBuilt-inManual
Event IDsBuilt-inManual
ComplexitySimpleMore complex

When to use SSE: When you need real-time updates from server to client without client sending data back. Examples: notifications, live feeds, AI streaming responses, stock tickers.

Quick Start

Creating an SSE endpoint takes just 3 steps:

  1. Set the path: /events
  2. Configure streaming: Set interval and optional retry
  3. Add messages: Create the events to be streamed
Example: Basic Notification Stream

Path: /notifications

Interval: 3000ms

Messages:

Message 1: {"type": "info", "message": "New user signed up"}
Message 2: {"type": "alert", "message": "Server CPU at 80%"}
Message 3: {"type": "success", "message": "Backup completed"}

Path Configuration

Define the access path for your SSE endpoint. The path must always start with /.

Example Paths:
βœ“/events- General event stream
βœ“/api/v1/notifications- Notification stream
βœ“/stream/prices- Price updates
βœ“/chat/completions- AI response streaming

βœ… Live Updates: You can change the path while the server is running - active connections are preserved!

SSE Protocol Format

Mocklantis generates SSE messages following the official specification. Here's how the wire format looks:

Wire Format:
id: 1
event: notification
data: {"type": "info", "message": "Hello!"}

id: 2
event: notification
data: {"type": "alert", "message": "Warning!"}
SSE Fields:
id:

Event ID

Unique identifier for reconnection support

event:

Event Type

Custom event name (defaults to "message")

data:

Event Data

The actual message content (JSON, text, etc.)

retry:

Retry Interval

Client reconnection delay in milliseconds

πŸ’‘ Multi-line Data: When your data contains multiple lines, each line is automatically prefixed with data:. The client reconstructs the original content.

Streaming Settings

⏱️Interval (ms)

Time between messages in milliseconds. This is the base timing for your stream.

500ms→ Fast (2 messages per second)
1000ms→ Normal (1 message per second)
5000ms→ Slow (1 message per 5 seconds)
πŸ”„Retry (ms) - Optional

Tells the client how long to wait before attempting to reconnect if the connection drops. This value is sent once when the connection is established.

1000ms→ Quick reconnect (1 second)
3000ms→ Default browser behavior
10000ms→ Conservative (10 seconds)
πŸ”–Support Last-Event-ID

When enabled, clients can resume from where they left off after a disconnection.

How it works:

  1. Client connects and receives events with IDs (1, 2, 3...)
  2. Connection drops at event ID 5
  3. Client reconnects with header: Last-Event-ID: 5
  4. Server resumes from event ID 6

⚠️ Important: For this to work, you must assign unique IDs to your messages!

Messages Configuration

Configure the messages that will be streamed to clients. Messages are sent sequentially and loop back to the beginning when the list ends.

Message Fields:
Event Type- optional

Custom event name. Clients can listen for specific event types using addEventListener().

Examples: message, notification, update, error

Event ID- optional

Unique identifier for the event. Required for Last-Event-ID reconnection support.

Formats: 1, 2, 3 (sequential), uuid, timestamp

Data- required

The actual content of the event. Can be JSON, plain text, XML, or any format.

{"type": "notification", "message": "Hello!"}
Delay (ms)- optional

Additional delay before sending this specific message. Added on top of the interval.

Example: Interval=1000ms, Delay=5000ms β†’ This message waits 6 seconds total

πŸ’‘ Random Variables: You can use random variables in your data! Use {{$randomInt}}, {{$randomEmail}}, etc. to generate dynamic content.

On Connect Message

Optionally send a welcome message immediately when a client connects, before the regular stream starts.

Use Cases:
βœ“Send connection confirmation
βœ“Provide initial state or configuration
βœ“Send authentication acknowledgment
βœ“Deliver cached/historical data before live stream

Example On Connect Message:

{
  "status": "connected",
  "server": "mocklantis-sse-v1",
  "timestamp": "2024-01-15T10:30:00Z"
}

Connection Examples

Connect to your SSE endpoint using these examples:

JavaScript (EventSource)
const eventSource = new EventSource('http://localhost:5678/events');

// Listen for default "message" events
eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);
    console.log('Received:', data);
};

// Listen for custom event types
eventSource.addEventListener('notification', (event) => {
    console.log('Notification:', event.data);
});

// Handle connection events
eventSource.onopen = () => console.log('Connected!');
eventSource.onerror = () => console.log('Error/Reconnecting...');

// Close connection
// eventSource.close();
curl
# Basic connection
curl -N http://localhost:5678/events

# With Last-Event-ID (resume from event 5)
curl -N -H "Last-Event-ID: 5" http://localhost:5678/events

# The -N flag disables buffering for real-time output
Python (sseclient)
import sseclient
import requests

url = 'http://localhost:5678/events'
response = requests.get(url, stream=True)
client = sseclient.SSEClient(response)

for event in client.events():
    print(f'Event: {event.event}')
    print(f'ID: {event.id}')
    print(f'Data: {event.data}')
    print('---')
Node.js (eventsource)
import EventSource from 'eventsource';

const es = new EventSource('http://localhost:5678/events');

es.onmessage = (event) => {
    console.log('Data:', event.data);
};

es.addEventListener('notification', (event) => {
    console.log('Notification:', JSON.parse(event.data));
});

Real-World Use Cases

SSE is used across many industries for real-time data delivery. Here are practical examples:

πŸ€–
AI/LLM Response Streaming

Mock ChatGPT, Claude, or any AI API that streams responses token by token.

Configuration:

Path: /v1/chat/completions

Interval: 50ms (fast token streaming)

Example Messages (OpenAI format):

{"choices":[{"delta":{"content":"Hello"}}]}
{"choices":[{"delta":{"content":" there"}}]}
{"choices":[{"delta":{"content":"!"}}]}
{"choices":[{"delta":{"content":" How"}}]}
{"choices":[{"delta":{"content":" can"}}]}
{"choices":[{"delta":{"content":" I"}}]}
{"choices":[{"delta":{"content":" help"}}]}
{"choices":[{"delta":{"content":"?"}}]}
[DONE]

Industry: AI/ML platforms, chatbots, coding assistants, content generation

πŸ””
Real-Time Notifications

Push notifications for social apps, e-commerce, or enterprise dashboards.

Configuration:

Path: /api/notifications

Interval: 5000ms

Example Messages:

{"type": "like", "user": "john", "message": "liked your post"}
{"type": "comment", "user": "jane", "message": "commented on your photo"}
{"type": "follow", "user": "mike", "message": "started following you"}
{"type": "order", "message": "Your order #1234 has shipped"}

Industry: Social media, e-commerce, SaaS platforms, mobile apps

πŸ“ˆ
Stock & Crypto Prices

Real-time price feeds for trading platforms and financial dashboards.

Configuration:

Path: /stream/prices

Interval: 1000ms

Example Messages:

{"symbol": "BTC", "price": 42350.75, "change": "+2.5%"}
{"symbol": "ETH", "price": 2245.30, "change": "+1.8%"}
{"symbol": "AAPL", "price": 178.25, "change": "-0.3%"}

Industry: FinTech, trading platforms, crypto exchanges, banking

⚽
Live Sports Scores

Real-time match updates, scores, and statistics.

Configuration:

Path: /live/match/123

Interval: 3000ms

Example Messages:

{"minute": 15, "score": "1-0", "event": "GOAL", "team": "home"}
{"minute": 32, "score": "1-1", "event": "GOAL", "team": "away"}
{"minute": 45, "event": "HALFTIME"}
{"minute": 78, "score": "2-1", "event": "GOAL", "team": "home"}

Industry: Sports betting, live score apps, sports media

🌑️
IoT Sensor Data

Stream sensor readings from IoT devices - temperature, humidity, motion, etc.

Configuration:

Path: /sensors/room-1

Interval: 2000ms

Example Message:

{
  "sensorId": "TEMP-001",
  "temperature": 23.5,
  "humidity": 65,
  "pressure": 1013.25,
  "timestamp": "2024-01-15T10:30:00Z"
}

Industry: Smart home, industrial IoT, agriculture, healthcare monitoring

πŸ“‹
Log Streaming

Real-time log output for monitoring dashboards and debugging tools.

Configuration:

Path: /logs/stream

Interval: 500ms

Event Types: info, warn, error

Example Messages:

{"level": "info", "message": "User login: [email protected]"}
{"level": "warn", "message": "High memory usage: 85%"}
{"level": "error", "message": "Database connection timeout"}
{"level": "info", "message": "Request processed in 234ms"}

Industry: DevOps, monitoring tools, debugging platforms, observability

πŸ“°
News & Social Feeds

Real-time content updates for news sites, social platforms, and content aggregators.

Configuration:

Path: /feed/latest

Interval: 10000ms

Example Messages:

{"type": "article", "title": "Breaking: New Tech Release", "author": "TechNews"}
{"type": "tweet", "user": "@elonmusk", "content": "Exciting announcement..."}
{"type": "post", "user": "jane_doe", "content": "Just shipped v2.0!"}

Industry: Media, journalism, social networks, content platforms

⏳
Progress & Status Updates

Track long-running operations like file uploads, data processing, or deployments.

Configuration:

Path: /jobs/123/status

Interval: 2000ms

Example Messages:

{"progress": 10, "status": "Initializing..."}
{"progress": 30, "status": "Processing files..."}
{"progress": 60, "status": "Analyzing data..."}
{"progress": 90, "status": "Finalizing..."}
{"progress": 100, "status": "Complete!", "result": "success"}

Industry: CI/CD, file hosting, data processing, cloud services

Best Practices

βœ… Use appropriate intervals

Match your production environment. AI streaming: 50-100ms. Notifications: 3-5s. Metrics: 1-2s.

βœ… Add Event IDs for reliability

Enable Last-Event-ID support so clients can resume after disconnection.

βœ… Use Event Types for filtering

Categorize events so clients can subscribe to specific types they care about.

βœ… Include timestamps

Add timestamps to messages for client-side freshness checks.

βœ… Test reconnection scenarios

Verify your client handles reconnection gracefully with Last-Event-ID.

❌ Don't use too few messages

Add variety - 5-10+ messages create more realistic simulations.

❌ Don't forget the retry field

Set a reasonable retry interval so clients don't hammer your server on reconnection.

Dynamic Updates

⚑Live Configuration Changes

Mocklantis allows real-time configuration changes without server restart or client disconnection!

βœ“Change interval β†’ Takes effect immediately
βœ“Add/remove messages β†’ Active in next cycle
βœ“Edit message content β†’ Updates instantly
βœ“Active connections preserved β†’ Clients stay connected

Example Scenario:

  1. Server running, 5 clients connected, interval: 2000ms
  2. You change interval to 500ms
  3. All clients stay connected, new interval applies immediately
  4. You add 3 new messages
  5. New messages appear in the stream on next cycle

Testing Tips

πŸ’‘ Browser DevTools: Open Network tab, filter by "EventStream" to see SSE connections and inspect incoming messages in real-time.

πŸ’‘ Multiple Clients: Open multiple browser tabs to verify each client receives their own independent stream.

πŸ’‘ Test Reconnection: Kill the connection (curl Ctrl+C) and reconnect with Last-Event-ID header to verify resume functionality.

πŸ’‘ Verify Event Types: Use different event types and ensure your client'saddEventListener() calls receive the correct events.

Summary

Mocklantis SSE provides a complete Server-Sent Events implementation with all the features you need:

Features:

  • βœ“ Event Types
  • βœ“ Event IDs
  • βœ“ Per-message Delay
  • βœ“ Retry Interval
  • βœ“ Last-Event-ID Support
  • βœ“ On Connect Message
  • βœ“ Random Variables

Benefits:

  • βœ“ Zero Config Complexity
  • βœ“ Live Updates
  • βœ“ No Server Restart
  • βœ“ Connection Preserved
  • βœ“ Protocol Compliant
  • βœ“ Production-Ready

Import & Export

Mocklantis provides powerful import features to help you quickly create endpoints from existing APIs. Access these features from the Settings button in the top-right corner of the workspace, under the "Import & Export" section.

πŸ“₯Import from Curl

Convert any curl command into a fully configured mock endpoint. Perfect for replicating existing API calls.

πŸ“ How to Import from Curl:

  1. Click the Settings button in the top-right corner
  2. Under "Import & Export" section, select "Import from Curl"
  3. Paste your curl command in the text area
  4. Click "Import"
  5. Mocklantis will automatically detect:
    • HTTP method (GET, POST, PUT, DELETE, etc.)
    • Endpoint path and query parameters
    • Request headers (Authorization, Content-Type, etc.)
    • Request body (if present)
  6. A new endpoint will be created with all detected parameters

Example Curl Command:

curl -X POST "http://localhost:3000/api/users?page=1" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer token123" \
  --data '{"name":"John","email":"[email protected]"}'

What gets imported:

  • Method: POST
  • Path: /api/users
  • Query Param: page=1 (type: number)
  • Request Headers: Content-Type, Authorization
  • Request Body: {"name":"John","email":"[email protected]"}

πŸ’‘ Pro Tip: You can copy curl commands directly from your browser's Network tab or API documentation!

πŸ“„Import from OpenAPI/Swagger

Import endpoints from OpenAPI/Swagger specifications. Supports URL, local file, or pasted JSON/YAML.

πŸ“ Import Options:

  • URL Import: Paste an OpenAPI spec URL and Mocklantis will fetch and parse it
  • File Import: Upload a local JSON or YAML OpenAPI file
  • Paste: Directly paste your OpenAPI JSON or YAML content

πŸ’‘ Pro Tip: OpenAPI import creates endpoints with example responses from your spec!

πŸ”„Complete Workflow Example

Scenario: Copying an endpoint from production to Mocklantis

  1. Step 1: Open your browser's Developer Tools (F12)
  2. Step 2: Go to the Network tab and make a request to the API
  3. Step 3: Right-click the request β†’ Copy β†’ Copy as cURL
  4. Step 4: In Mocklantis, click Settings β†’ Import & Export β†’ "Import from Curl"
  5. Step 5: Paste the curl command and click Import
  6. Step 6: Customize the response body with your mock data
  7. Step 7: Start the server and test your mock endpoint

🎯 Result: You've successfully created a mock endpoint that mirrors your production API!

Pro Tips

πŸ’‘ Use curl import to quickly replicate any API call without manual configuration

πŸ’‘ Import from OpenAPI spec to generate multiple endpoints at once

βœ… After importing, always review and customize the response body to match your testing needs

Logging & Request Monitoring

Mocklantis provides real-time request logging and monitoring for all HTTP requests hitting your mock servers. The Logs panel shows you every detail of incoming requests and outgoing responses, similar to browser DevTools Network tab.

Located at the bottom of the application, the Logs panel automatically captures all traffic and displays it in real-time as requests come in.

πŸ“Accessing the Logs Panel

  1. The Logs panel is located at the bottom of the application
  2. Click on the "Logs" header to expand/collapse the panel
  3. The panel shows a count of total logs captured: Logs (47)
  4. You can resize the panel by dragging the top border up or down

πŸ“ŠWhat Gets Logged

Every HTTP request to your mock servers is automatically logged with the following information:

Timestamp:Exact time the request was received (HH:MM:SS)
Server Info:Server name and port (e.g., "My API:8021")
HTTP Method:GET, POST, PUT, DELETE, PATCH, OPTIONS, etc.
Request Path:Full URL path including query parameters
Status Code:HTTP response status (200, 404, 500, etc.)
Duration:Response time in milliseconds
Request Headers:All headers sent by the client
Request Body:Request payload (JSON, XML, text, etc.)
Response Headers:All headers returned by the mock server
Response Body:Response payload sent back to the client

πŸ”Inspecting Request Details

Click on any log entry to expand it and view full details. The expanded view provides four tabs:

Request Headers

View all headers sent by the client, including Content-Type, Authorization, User-Agent, etc. Useful for debugging authentication issues or content negotiation.

Request Body

Inspect the payload sent by the client. For JSON requests, you'll see the formatted JSON. This helps verify that clients are sending the correct data structure.

Response Headers

See all headers returned by your mock, including Content-Type, CORS headers, custom headers, etc. Verify that your response headers are configured correctly.

Response Body

View the exact response payload sent back to the client. Verify random variables were replaced, response body matches expectations, and JSON structure is correct.

🎨Visual Color Coding

Logs use color coding to help you quickly identify request types and response statuses:

HTTP Methods:

GETPOSTPUTDELETEPATCH

Status Codes:

2xxSuccess (green)
3xxRedirection (blue)
4xxClient Error (orange)
5xxServer Error (red)

πŸ’ΌReal-World Use Cases

1. Debugging Authentication Issues

When testing login flows, check the Request Headers to verify the Authorization token is being sent correctly. Check the Response Body to ensure your mock is returning the expected token format.

2. Verifying CORS Configuration

Look at Response Headers to confirm CORS headers (Access-Control-Allow-Origin, etc.) are present. Check for OPTIONS requests (preflight) in the logs to ensure they're handled correctly.

3. Testing Random Variables

Expand logs and view Response Body to verify random variables like {{random.uuid}} are being replaced with actual values. Each request should show different random data.

4. Monitoring Response Times

Check the Duration (in milliseconds) for each request. If you've configured a response delay, verify it's working by checking the duration matches your configured delay.

5. Validating Request Matching

When using query parameter matching or header matching, inspect logs to see which endpoint was matched. Check Request Headers and query parameters to verify the matching logic is working as expected.

⚑Key Features

  • βœ“Real-time Streaming: Logs appear instantly as requests come in, no refresh needed
  • βœ“Auto-scroll: Panel automatically scrolls to show the latest logs
  • βœ“Resizable Panel: Drag the top border to adjust panel height (150px - 500px)
  • βœ“Collapsible: Click the header to show/hide logs panel
  • βœ“Clear Logs: Remove all logs with a single click using the "Clear Logs" button
  • βœ“Multi-server Support: Logs from all running servers appear in a single unified view
  • βœ“Complete Request/Response Inspection: View every detail of HTTP transactions

Pro Tips

πŸ’‘ Keep the Logs panel open during development to monitor all API traffic in real-time

πŸ’‘ Use the timestamp to correlate logs with actions in your frontend application

πŸ’‘ Check status codes - orange/red indicates errors that need attention

πŸ’‘ Verify request bodies before investigating response issues - often the problem is in what the client sends

πŸ’‘ Clear logs periodically to focus on recent requests and improve performance

πŸ’‘ Use the duration to identify slow endpoints - if it's much higher than your configured delay, something's wrong

πŸ’‘ For WebSocket endpoints, check the logs for the initial HTTP upgrade request

Need more help?

Check our FAQ for common questions or reach out to the community

Visit FAQ