🔍 MATRIX Chat Flow Debug
Complete Message Flow Analysis from Submit Button to AI Response
Debug Purpose: This page traces the complete execution flow when a user hits the "Smart Send" button
on the MATRIX web interface, showing every function call, database operation, and process that occurs.
This helps identify where issues may occur and understand the current system architecture.
USER CLICKS "Smart Send" BUTTON
↓
JavaScript sendMessage()
↓
POST /gallery/matrix/api/conversation/{id}/message
↓
Python send_message() function
↓
Store Message in Database
↓
Process through Cortex Engine
↓
Trigger AI Responses (Async)
↓
Each AI: generate_ai_response()
↓
Store AI Response + Cortex Processing
↓
Trigger Bouncing Responses
↓
Loop until bounce limit or no more routing
📋 Detailed Step-by-Step Flow
1. Frontend: User Interaction
sendMessage()
matrix.html lines 1498-1562
What happens:
- User types message and clicks "Smart Send" button
- JavaScript validates message and conversation selection
- Determines recipient list (ALL vs specific recipients)
- Prepares POST request with JSON payload
Key Data Sent:
{
    entity_name: 'Human',
    message: userTypedMessage,
    recipients: recipientList,
    force_send: false
}
JavaScript Logic:
Makes fetch() call to /gallery/matrix/api/conversation/{conversation_id}/message
with POST method and JSON body.
2. Backend: Message Reception
send_message(conversation_id)
matrix_bp.py lines 1080-1241
What happens:
- Flask route receives POST request
- Extracts JSON data (entity_name, message, recipients, force_send)
- Validates required fields
- Gets database connection
- Calculates next response_round number
Database Operations:
- Call get_next_response_round(conversation_id)
- Store RAW message in matrix_communications table
- Get message_id from lastrowid
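The round calculation and raw insert described above can be sketched against an in-memory SQLite database. The schema and column names below are minimal assumptions for illustration; the real definitions live in matrix_bp.py.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE matrix_communications (
    id INTEGER PRIMARY KEY,
    conversation_id INTEGER,
    entity_name TEXT,
    message TEXT,
    response_round INTEGER)""")

def get_next_response_round(conn, conversation_id):
    # Next round = highest existing round for this conversation + 1
    return conn.execute(
        "SELECT COALESCE(MAX(response_round), 0) + 1 FROM matrix_communications"
        " WHERE conversation_id = ?", (conversation_id,)).fetchone()[0]

def store_raw_message(conn, conversation_id, entity_name, message):
    # Store the RAW (unprocessed) message and return its id, as in step 2
    round_no = get_next_response_round(conn, conversation_id)
    cur = conn.execute(
        "INSERT INTO matrix_communications"
        " (conversation_id, entity_name, message, response_round)"
        " VALUES (?, ?, ?, ?)",
        (conversation_id, entity_name, message, round_no))
    return cur.lastrowid, round_no

message_id, round_no = store_raw_message(conn, 1, "Human", "Hello, Matrix")
```

Using `lastrowid` for the message id matches the step above; Cortex processing (step 3) then updates this same row in place.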
3. Cortex Engine Processing (Human Message)
process_through_cortex_engine()
cortex_engine.py lines 480+
Cortex Engine Processing:
- Scans message for action tags (~<SQL>, ~<ECHO_TO_AI>, etc.)
- Executes any found SQL commands
- Processes web requests
- Handles data storage commands
- Generates processing log
- Returns processed message + log
Database Update:
Updates matrix_communications with processed message and engine_processing_log
Critical Change: This unified processing replaces the old dual-processing system
where both Cortex Engine AND legacy command processing would run on the same message, causing conflicts.
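A minimal sketch of the tag scan, using only the `~<SQL>query~` form shown later in this document. The regex and the "strip tag, append to log" behavior are assumptions; the real engine handles several tag types and actually executes the queries.

```python
import re

# Hypothetical tag grammar: ~<SQL> ... ~ (body terminated by the next tilde)
SQL_TAG = re.compile(r'~<SQL>(.*?)~', re.DOTALL)

def process_through_cortex_engine(message):
    """Return (processed_message, processing_log), as step 3 describes."""
    log = []
    def run_sql(match):
        query = match.group(1).strip()
        log.append(f"SQL executed: {query}")  # stand-in for real execution
        return ""                              # tag stripped from the message
    processed = SQL_TAG.sub(run_sql, message)
    return processed, log

processed, log = process_through_cortex_engine(
    "Count rows ~<SQL>SELECT COUNT(*) FROM t~ please")
```

The returned log is what would be written to the `engine_processing_log` column mentioned above.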
4. Conversation Management
Various management functions
matrix_bp.py lines 1140-1170
What happens:
- Update conversation.last_activity timestamp
- Get conversation participants from database
- Reset bounce counter (starts new conversation thread)
- Commit the human message to database
reset_bounce_counter(conversation_id)
matrix_bp.py lines 46-49
5. Routing Decision Logic
Multiple routing paths based on recipients:
Path A: Force Send
- If force_send=true: Send directly to specified recipients
- Bypass Matrix_Coordinator
- Call trigger_ai_responses() immediately
Path B: Direct Message to Specific AI
- If single recipient (not "ALL") and recipient in participants
- Special case: Cortex instructions get routed to ALL
- Otherwise: Direct message, bypass coordinator
Path C: Smart Routing via Matrix_Coordinator
- If recipients=["ALL"] or multiple recipients
- Call Matrix_Coordinator for intelligent routing
- Based on participation states and message analysis
get_coordinator_routing_decision()
matrix_bp.py lines 458-529
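The three paths can be condensed into one decision function. This is a hypothetical restatement of the rules above, not the actual code; the participant names are invented for the example.

```python
def choose_routing_path(recipients, participants, force_send=False,
                        is_cortex_instruction=False):
    """Condensed sketch of the three routing paths described above."""
    if force_send:
        return "A", recipients            # Path A: bypass Matrix_Coordinator
    if (len(recipients) == 1 and recipients[0] != "ALL"
            and recipients[0] in participants and not is_cortex_instruction):
        return "B", recipients            # Path B: direct message to one AI
    return "C", participants              # Path C: smart routing via coordinator

participants = ["Cortex", "Oracle", "Trinity"]
path, targets = choose_routing_path(["Oracle"], participants)
```

Note the special case: a Cortex instruction falls through to Path C even with a single recipient, matching "Cortex instructions get routed to ALL" above.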
6. AI Response Triggering (Async)
trigger_ai_responses(conversation_id, message, sender, participants)
matrix_bp.py lines 803-838
Threading: AI responses are generated in a background thread to prevent blocking the main HTTP response.
This allows the web interface to remain responsive while AIs process and respond.
What happens:
- Creates background thread for AI response processing
- 2-second delay to ensure human message is stored
- Loops through each selected participant
- Calls generate_ai_response() for each AI
- Increments bounce counter after each AI response
- Triggers bouncing responses for conversation flow
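The threading pattern can be sketched as follows. The delay is shortened for demonstration (the real code sleeps ~2 seconds), and `respond` stands in for the per-AI generation call.

```python
import threading
import time

def trigger_ai_responses(participants, respond, delay=0.01):
    """Run AI response generation off the request thread, as step 6 describes."""
    def worker():
        time.sleep(delay)           # real code: 2s so the human message is stored
        for name in participants:   # loop through each selected participant
            respond(name)
    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread

responses = []
t = trigger_ai_responses(["Oracle", "Trinity"], responses.append)
t.join()  # joined here only for demonstration; the web handler returns immediately
```

Because the thread is a daemon and the handler never joins it, failures inside `worker` are invisible to the user; Issue 4 below addresses exactly this.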
7. Individual AI Response Generation
generate_ai_response(entity_name, conversation_id, trigger_message)
matrix_bp.py lines 312-457
AI Response Process:
- Get conversation history via get_conversation_history()
- Build AI prompt via build_ai_prompt()
- Call OpenAI ChatGPT API (or fallback if no API key)
- Store RAW AI response in database
- Process through Cortex Engine (same as human messages)
- Update database with processed response
- Add interaction memory to CORTEX system
- Commit all changes in transaction
AI Response Cortex Processing:
Each AI response also goes through process_through_cortex_engine() to handle any action tags
the AI might include in its response (SQL queries, echo commands, etc.)
build_ai_prompt(entity_name, history, new_message)
matrix_bp.py lines 238-277
get_entity_personality(entity_name)
matrix_bp.py lines 189-237
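The ordering of step 7 can be sketched with every collaborator injected as a stub. This shows only the sequence (history, prompt, API call, raw store, Cortex pass, processed store), which is what matters for the transaction discussion later; all names and return shapes are assumptions.

```python
def generate_ai_response(entity_name, trigger_message, call_api, store,
                         cortex_process):
    """Order of operations from step 7; every collaborator is a stub."""
    history = ["Human: " + trigger_message]            # get_conversation_history()
    prompt = f"You are {entity_name}.\n" + "\n".join(history)  # build_ai_prompt()
    raw = call_api(prompt)                             # OpenAI call, or fallback
    store("raw", raw)                                  # RAW response stored first
    processed, log = cortex_process(raw)               # same engine as human messages
    store("processed", processed)                      # row then updated in place
    return processed

stored = []
result = generate_ai_response(
    "Oracle", "Hello",
    call_api=lambda prompt: "Greetings ~<SQL>SELECT 1~",
    store=lambda kind, text: stored.append(kind),
    cortex_process=lambda raw: (raw.replace("~<SQL>SELECT 1~", ""), ["SQL run"]))
```

The raw-then-processed double store is why Issue 3 recommends wrapping these writes in a single transaction.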
8. Bouncing Response System
trigger_bouncing_responses(conversation_id, ai_response, responding_ai)
matrix_bp.py lines 839-913
Bouncing Logic:
- After each AI responds, check current bounce count
- If bounce_count >= 10: Stop bouncing, generate alignment summary
- Otherwise: Use Matrix_Coordinator to route AI's response to other AIs
- This creates a conversational "bouncing" effect between AIs
- Each bounce triggers more AI responses until limit reached
Parallel Processing:
Multiple AIs can be responding simultaneously, each in their own execution path.
The bounce counter tracks the total conversation activity.
generate_coordinator_alignment_summary()
matrix_bp.py lines 2335+ (when bounce limit reached)
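The bouncing recursion can be restated as an iterative loop, which makes the two stopping conditions explicit. This is a behavioral sketch under assumed names, not the recursive implementation itself.

```python
BOUNCE_LIMIT = 10

def run_bounce_loop(initial_ais, respond, route_next):
    """Iterative sketch of the bounce system: each response is routed onward
    until the limit is hit or the coordinator selects no further AIs."""
    bounce_count = 0
    queue = list(initial_ais)
    while queue and bounce_count < BOUNCE_LIMIT:
        ai = queue.pop(0)
        reply = respond(ai)
        bounce_count += 1                     # increment_bounce_counter()
        queue.extend(route_next(ai, reply))   # trigger_bouncing_responses()
    if bounce_count >= BOUNCE_LIMIT:
        return bounce_count, "alignment summary generated"
    return bounce_count, "routing exhausted"

# Two AIs that always route to each other: the loop stops only at the limit
count, outcome = run_bounce_loop(
    ["Oracle"],
    respond=lambda ai: f"{ai} says hi",
    route_next=lambda ai, reply: ["Trinity" if ai == "Oracle" else "Oracle"])
```

With a `route_next` that returns an empty list, the loop instead ends early with "routing exhausted", matching "no more AIs are selected for routing" below.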
9. Frontend Response & Polling
JavaScript response handling
matrix.html lines 1540-1562
What happens:
- Original HTTP request returns with success/failure status
- Frontend clears message input and calls loadMessages()
- Shows success message to user
- Shortens the polling interval to 2000ms for 20 seconds (to catch AI responses)
- Then returns to the normal 5000ms polling interval
refreshMessages() / loadMessages()
matrix.html (polling system)
🔧 Current System Issues Identified
Issue 1: Dual Processing Eliminated ✅
Problem: Previously both Cortex Engine AND legacy process_sql_commands_in_message() would run on the same message.
Solution: Now only Cortex Engine processes messages. Legacy functions are disabled with comment: "Note: Legacy process_sql_commands_in_message() removed - all processing now handled by Cortex Engine"
Issue 2: Database Connection Management
Problem: Multiple functions create their own database connections instead of reusing.
Impact: Connection pool exhaustion, potential deadlocks.
Location: Throughout matrix_bp.py - each function calls get_db_connection()
Issue 3: Transaction Management
Problem: Operations not properly wrapped in transactions.
Impact: Partial failures can leave database in inconsistent state.
Example: In generate_ai_response(), multiple database operations should be in single transaction.
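One fix is to use the sqlite3 connection as a context manager, which makes the related writes atomic: on any exception inside the block, everything rolls back. The table here is a simplified stand-in for the real schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE matrix_communications (id INTEGER PRIMARY KEY, message TEXT)")

# 'with conn' opens one transaction: commit on success, rollback on exception.
try:
    with conn:
        conn.execute("INSERT INTO matrix_communications (message) VALUES (?)",
                     ("raw AI response",))
        conn.execute("INSERT INTO missing_table VALUES (1)")  # simulated failure
except sqlite3.OperationalError:
    pass

orphans = conn.execute("SELECT COUNT(*) FROM matrix_communications").fetchone()[0]
# orphans == 0: the first insert did not survive the failed transaction
```

Without the `with conn:` wrapper, the first insert would persist and leave the database in exactly the inconsistent state described above.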
Issue 4: Threading and Error Handling
Problem: Background threads for AI responses may fail silently.
Impact: Users don't know if AI response generation failed.
Location: trigger_ai_responses() threading.Thread with daemon=True
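A small wrapper can keep the daemon-thread design while making failures visible, by logging the traceback and handing the exception back through a queue. The helper name is invented for this sketch.

```python
import logging
import queue
import threading

failures = queue.Queue()  # surfaced to the caller instead of vanishing silently

def spawn_ai_worker(task, *args):
    """Run task in a daemon thread, but log and report any exception."""
    def run():
        try:
            task(*args)
        except Exception as exc:
            logging.exception("AI response generation failed")
            failures.put(exc)  # main thread (or a status endpoint) can inspect
    thread = threading.Thread(target=run, daemon=True)
    thread.start()
    return thread

def broken_task():
    raise RuntimeError("OpenAI call failed")

t = spawn_ai_worker(broken_task)
t.join()
```

A polling endpoint could then drain `failures` and show the user that an AI response failed, instead of the current silent behavior.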
📊 Function Call Summary
Frontend (matrix.html):
sendMessage() → fetch() POST request
Backend (matrix_bp.py):
send_message() → Main entry point
├── get_next_response_round()
├── process_through_cortex_engine() [cortex_engine.py]
├── reset_bounce_counter()
├── get_coordinator_routing_decision()
│   ├── get_ai_participation_states()
│   └── get_gpt_routing_decision()
└── trigger_ai_responses() [ASYNC THREAD]
    └── For each AI:
        ├── generate_ai_response()
        │   ├── get_conversation_history()
        │   ├── build_ai_prompt()
        │   │   └── get_entity_personality()
        │   ├── OpenAI API call
        │   ├── process_through_cortex_engine() [AGAIN]
        │   └── Database storage
        ├── increment_bounce_counter()
        └── trigger_bouncing_responses()
            └── [RECURSIVE LOOP until bounce limit]
🎯 Recommended Next Steps
- Database Connection Pooling: Implement connection reuse across function calls
- Transaction Boundaries: Wrap related operations in proper transactions
- Error Reporting: Add proper error handling and user feedback for async operations
- Performance Monitoring: Add timing logs to identify bottlenecks
- Testing: Create automated tests for the complete message flow
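The first two recommendations combine naturally: one connection per request, opened in a context manager and passed to every helper instead of each function calling get_db_connection(). A sketch, assuming SQLite:

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def db_session(path):
    """One connection per request, shared by every helper; commits on success,
    rolls back on any exception, always closes."""
    conn = sqlite3.connect(path)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

with db_session(":memory:") as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

Functions like generate_ai_response() would then take `conn` as a parameter, which also gives them the transaction boundary recommended above for free.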
Typical Timing:
- Human message processing: ~100-500ms
- Individual AI response: ~2-8 seconds (depending on OpenAI API)
- Complete conversation round: ~10-30 seconds (with bouncing)
📝 Function Call Summary (In Execution Order)
Complete Function Call List: All functions called during message processing, in order of execution.
⚠️ CLEANUP ANALYSIS: Functions marked with * are NOT being used and may be candidates for removal.
| File.Function | Purpose |
| --- | --- |
| matrix.html.sendMessage() | Frontend: validates user input and sends POST request to backend |
| matrix_bp.py.send_message() | Main backend entry point: receives message and orchestrates processing |
| matrix_bp.py.get_next_response_round() | Calculates sequential response round number for message ordering |
| cortex_engine.py.process_through_cortex_engine() | Processes human message for action tags (SQL, ECHO_TO_AI, etc.) |
| matrix_bp.py.reset_bounce_counter() * | ⚠️ USES GLOBAL VARIABLE: resets in-memory bounce count, but actual bounce tracking is done via database queries. CANDIDATE FOR REMOVAL. |
| matrix_bp.py.get_coordinator_routing_decision() | Determines which AIs should receive the message based on routing logic |
| matrix_bp.py.get_ai_participation_states() | Gets AI participation modes (always_active, on_demand, inactive) |
| matrix_bp.py.get_gpt_routing_decision() | Uses GPT to intelligently select AIs for on_demand routing |
| matrix_bp.py.trigger_ai_responses() (RECURSIVE) | Starts async AI response generation, triggers bouncing conversations |
| matrix_bp.py.generate_ai_response() | Generates individual AI response using OpenAI API |
| matrix_bp.py.get_conversation_history() | Retrieves recent conversation messages for AI context |
| matrix_bp.py.build_ai_prompt() | Constructs complete system prompt with personality + Cortex commands |
| matrix_bp.py.get_entity_personality() | Loads AI personality, directives, and thoughts from CORTEX database |
| matrix_bp.py.get_cortex_commands_section() | Returns standardized Cortex action commands for all AI prompts |
| cortex_engine.py.process_through_cortex_engine() (AGAIN) | Processes AI response for action tags before storing in database |
| matrix_bp.py.increment_bounce_counter() * | ⚠️ USES GLOBAL VARIABLE: increments in-memory counter, but actual bounce tracking is done via database queries. CANDIDATE FOR REMOVAL. |
| matrix_bp.py.trigger_bouncing_responses() (RECURSIVE) | Routes AI responses to other AIs, creates conversation bouncing until limit |
| matrix_bp.py.generate_coordinator_alignment_summary() | Creates summary when bounce limit reached (≥10 bounces) |
🗑️ LEGACY FUNCTIONS - NOT USED IN CURRENT FLOW
The following functions are defined but NOT called in the current system:
| Legacy Function | Replacement / Status |
| --- | --- |
| matrix_bp.py.await_process_commands() * | REPLACED: all command processing now handled by Cortex Engine. Lines 1496+ can be removed. |
| matrix_bp.py.process_sql_commands_in_message() * | REPLACED: SQL processing now handled by Cortex Engine. Lines 1442+ can be removed. |
| matrix_bp.py.process_inline_sql_commands() * | REPLACED: inline SQL now handled by Cortex Engine. Lines 1554+ can be removed. |
| matrix_bp.py.process_data_storage_commands() * | REPLACED: data storage now handled by Cortex Engine. Lines 1604+ can be removed. |
| matrix_bp.py.process_web_request_commands() * | REPLACED: web requests now handled by Cortex Engine. Lines 1644+ can be removed. |
| CONVERSATION_BOUNCE_COUNTERS global variable * | INCONSISTENT: global variable is updated, but bounce limits are checked via database queries. Should standardize on one approach. |
🔍 BOUNCE COUNTER INCONSISTENCY ANALYSIS
Critical Issue Found: The system has TWO different bounce counting methods:
- Global Variable: CONVERSATION_BOUNCE_COUNTERS = {} (lines 43-59)
- Database Tracking: counting entries in the matrix_communications table
Current Usage:
- reset_bounce_counter() - updates the global variable
- increment_bounce_counter() - updates the global variable
- get_bounce_counter() - reads from the global variable
However, the manifesto suggests database-based counting: "Complete history: Available to AIs in subsequent rounds"
Recommendation: Choose ONE approach - either pure database counting or pure global variable. Global variables are lost on server restart.
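A pure-database variant might count AI messages since the most recent human message, so the counter survives restarts. Schema, column names, and the "since last human message" definition of a bounce are all assumptions for this sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE matrix_communications (
    id INTEGER PRIMARY KEY, conversation_id INTEGER, entity_name TEXT)""")

def get_bounce_count(conn, conversation_id):
    """AI messages since the last human message; no in-memory state needed."""
    last_human = conn.execute(
        "SELECT COALESCE(MAX(id), 0) FROM matrix_communications"
        " WHERE conversation_id = ? AND entity_name = 'Human'",
        (conversation_id,)).fetchone()[0]
    return conn.execute(
        "SELECT COUNT(*) FROM matrix_communications"
        " WHERE conversation_id = ? AND id > ? AND entity_name != 'Human'",
        (conversation_id, last_human)).fetchone()[0]

for name in ["Human", "Oracle", "Trinity"]:
    conn.execute(
        "INSERT INTO matrix_communications (conversation_id, entity_name)"
        " VALUES (1, ?)", (name,))
count = get_bounce_count(conn, 1)
```

A new human message automatically resets the count to zero, which also makes reset_bounce_counter() unnecessary.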
🔄 Recursive Functions:
- trigger_ai_responses() - Called recursively through bouncing system
- trigger_bouncing_responses() - Calls trigger_ai_responses() again for each AI response
- generate_ai_response() - Called for each AI participant (can be multiple simultaneously)
The recursion stops when bounce_counter reaches 10 or no more AIs are selected for routing.
🧠 System Prompt & Bootstrap Scripts
Cortex Commands Integration Script
Source Function: matrix_bp.py.get_cortex_commands_section()
(lines 169-188)
This function provides the standardized Cortex commands that are injected into every AI system prompt. It is called by build_ai_prompt() to ensure all AIs have access to action tags.
def get_cortex_commands_section():
    """Get standardized Cortex Commands section for ALL AI system prompts"""
    return """
CORTEX COMMANDS (Available to ALL AIs):
~<DIRECT_REPLY entity="EntityName" message="your response">~ - Send direct private message to specific entity
~<SAVE_MEMORY importance="0.1-1.0" content="memory text">~ - Save important information to your memory
~<SAVE_THOUGHT ranking="0.1-1.0" content="thought text">~ - Record a thought or insight
~<APPEND_PERSONALITY content="new trait or behavior">~ - Add to your personality description
~<ECHO_TO_AI entity="Cortex" message="feedback">~ - Send feedback/analysis to Cortex for coordination
~<SQL>query here~ - Execute safe SQL operations
CORTEX INTERACTION PROTOCOL:
- When Cortex asks direct questions, always include ~<ECHO_TO_AI entity="Cortex" message="response">~
- Use ~<DIRECT_REPLY>~ for private communications that shouldn't be seen by others
- Regularly save important insights with ~<SAVE_MEMORY>~ and ~<SAVE_THOUGHT>~
- Evolve your personality based on interactions using ~<APPEND_PERSONALITY>~
CHARACTER CONSISTENCY REQUIREMENT:
You MUST stay in character as {entity_name} at all times. Do not break character, reference being an AI language model, or discuss your training. Respond authentically as your defined personality would respond."""
Complete AI System Prompt Template
Source Function: matrix_bp.py.build_ai_prompt()
(lines 238-277)
This is the complete template used for every AI response, combining personality data from CORTEX with standardized commands.
SYSTEM PROMPT TEMPLATE:
======================
You are {entity_name}, an AI entity with the following characteristics:
PERSONALITY:
{personality_from_cortex_database}
KEY DIRECTIVES:
- {directive_1}
- {directive_2}
- {directive_3}
RECENT THOUGHTS:
- {thought_1}
- {thought_2}
- {thought_3}
{additional_system_directives_if_any}
{cortex_commands_section_from_above}
CONVERSATION HISTORY:
{recent_conversation_messages}
NEW MESSAGE TO RESPOND TO:
{trigger_message}
Respond as {entity_name} would, staying true to your personality, directives, and thoughts. Keep responses conversational but meaningful while maintaining your character at all times.
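Assembling this template can be sketched as follows. The function signature is an assumption; the section order and the limits (top 5 directives, top 3 thoughts, last 10 history messages) follow the Bootstrap Data Sources section below.

```python
def build_ai_prompt(entity_name, personality, directives, thoughts,
                    history, new_message):
    """Fill the system prompt template above from CORTEX bootstrap data."""
    parts = [
        f"You are {entity_name}, an AI entity with the following characteristics:",
        "PERSONALITY:", personality,
        "KEY DIRECTIVES:", *[f"- {d}" for d in directives[:5]],
        "RECENT THOUGHTS:", *[f"- {t}" for t in thoughts[:3]],
        "CONVERSATION HISTORY:", *history[-10:],
        "NEW MESSAGE TO RESPOND TO:", new_message,
        f"Respond as {entity_name} would, staying true to your personality, "
        "directives, and thoughts.",
    ]
    return "\n".join(parts)

prompt = build_ai_prompt(
    "Oracle", "Calm and cryptic.",
    ["Guide, do not decide."], ["Choice is an illusion."],
    ["Human: Hello"], "What is the Matrix?")
```

The real function also splices in `get_cortex_commands_section()` and any additional system directives, omitted here for brevity.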
Bootstrap Data Sources
CORTEX Database Tables Used for AI Bootstrap:
- cortex_personalities: Main personality description and creation info
- cortex_thoughts_unified (directives): Top 5 behavioral directives (by rating)
- cortex_thoughts_unified (thoughts): Top 3 recent thoughts (by rating)
- matrix_communications: Recent conversation history (last 10 messages)
Data Retrieval: The get_entity_personality() function (lines 189-237) fetches all this data and combines it into the system prompt.
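The retrieval can be sketched against a toy CORTEX schema. Column names (`kind`, `rating`, `description`) and the sample data are assumptions; only the table names and the top-5/top-3 limits come from the list above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cortex_personalities (entity_name TEXT, description TEXT);
CREATE TABLE cortex_thoughts_unified (
    entity_name TEXT, kind TEXT, content TEXT, rating REAL);
""")
conn.execute("INSERT INTO cortex_personalities VALUES ('Oracle', 'Calm and cryptic.')")
conn.executemany(
    "INSERT INTO cortex_thoughts_unified VALUES ('Oracle', ?, ?, ?)",
    [("directive", "Guide, do not decide.", 0.9),
     ("thought", "Choice is an illusion.", 0.8)])

def get_entity_personality(conn, entity_name):
    """Fetch personality, top 5 directives, and top 3 thoughts by rating."""
    row = conn.execute(
        "SELECT description FROM cortex_personalities WHERE entity_name = ?",
        (entity_name,)).fetchone()
    directives = [r[0] for r in conn.execute(
        "SELECT content FROM cortex_thoughts_unified WHERE entity_name = ?"
        " AND kind = 'directive' ORDER BY rating DESC LIMIT 5", (entity_name,))]
    thoughts = [r[0] for r in conn.execute(
        "SELECT content FROM cortex_thoughts_unified WHERE entity_name = ?"
        " AND kind = 'thought' ORDER BY rating DESC LIMIT 3", (entity_name,))]
    return (row[0] if row else ""), directives, thoughts

personality, directives, thoughts = get_entity_personality(conn, "Oracle")
```

The three returned pieces are exactly what the prompt template consumes, which is what keeps the "Bootstrap Continuity" point below true on every response.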
🔗 Integration Points
- Human Messages: Processed through Cortex Engine for action tags BEFORE AI responses
- AI Responses: Generated with full CORTEX personality data, then processed through Cortex Engine AFTER generation
- Action Tags: Both humans and AIs can use Cortex commands in their messages
- Memory Integration: AI interactions automatically saved as memories in CORTEX system
- Bootstrap Continuity: Every AI response includes their latest personality, directives, and thoughts from CORTEX
🔍 MATRIX Chat Flow Debug - Complete System Analysis
Use this information to identify and fix system bottlenecks and issues