Wazuh 4.14 Troubleshooting - Diagnostic Guide
Troubleshooting Wazuh begins with identifying the faulty component and analyzing the relevant logs. This guide organizes common issues by component, provides diagnostic commands, and offers step-by-step resolution instructions. It covers the manager, agents, indexer, dashboard, and performance concerns.
Log File Locations
Diagnosing any Wazuh issue requires knowing where each component writes its logs.
| Component | Log File | Description |
|---|---|---|
| Wazuh Manager | /var/ossec/logs/ossec.log | Main manager log |
| Wazuh Manager | /var/ossec/logs/api.log | REST API log |
| Wazuh Manager | /var/ossec/logs/cluster.log | Manager cluster log |
| Wazuh Agent | /var/ossec/logs/ossec.log (Linux/macOS) | Agent log |
| Wazuh Agent | C:\Program Files (x86)\ossec-agent\ossec.log (Windows) | Windows agent log |
| Wazuh Indexer | /var/log/wazuh-indexer/wazuh-indexer.log | Indexer log |
| Wazuh Indexer | /var/log/wazuh-indexer/wazuh-indexer_deprecation.log | Deprecation warnings |
| Wazuh Dashboard | /var/log/wazuh-dashboard/opensearch_dashboards.log | Dashboard log |
| Filebeat | /var/log/filebeat/filebeat | Filebeat log |
Manager Fails to Start
Checking Service Status
systemctl status wazuh-manager
journalctl -u wazuh-manager -n 100
Validating Configuration
The most common cause is an error in ossec.conf:
/var/ossec/bin/wazuh-control config-test
If the command returns an error, fix the indicated line in /var/ossec/etc/ossec.conf and rerun the test.
Common Configuration Errors
Unclosed XML tag:
ERROR: (1226): Error reading XML file '/var/ossec/etc/ossec.conf'
Check the file for valid XML markup. Use xmllint for quick validation:
xmllint --noout /var/ossec/etc/ossec.conf
Missing rule or decoder file:
ERROR: (1202): Missing file '/var/ossec/etc/rules/custom_rule.xml'
Verify that all files referenced in the <ruleset> section exist on disk.
Port conflicts:
ss -tlnp | grep -E '1514|1515|55000'
If the ports are occupied by another process, adjust the settings in ossec.conf or terminate the conflicting process.
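If port 1514 itself must be moved, the agent connection port is set in the <remote> block of the manager's ossec.conf. A sketch (1516 is an arbitrary example; every agent's ossec.conf must be updated to point at the same port):

```xml
<!-- Sketch for /var/ossec/etc/ossec.conf on the manager: move agent
     traffic off 1514. Port 1516 is an arbitrary example. -->
<remote>
  <connection>secure</connection>
  <port>1516</port>
</remote>
```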
File Permission Issues
All files under /var/ossec/ must be owned by the wazuh user:
chown -R wazuh:wazuh /var/ossec/etc/rules/
chown -R wazuh:wazuh /var/ossec/etc/decoders/
systemctl restart wazuh-manager
Agents Not Connecting
Agent connectivity issues are the most frequent category of support requests. Diagnose them systematically.
Verifying Network Connectivity
From the agent host, check that the manager ports are reachable:
# Registration port
nc -zv MANAGER_IP 1515
# Data exchange port
nc -zv MANAGER_IP 1514
If the ports are unreachable, inspect the firewall rules on the manager:
# iptables
iptables -L -n | grep -E '1514|1515'
# firewalld
firewall-cmd --list-ports
Verifying Agent Registration
On the manager, check whether the agent is registered:
/var/ossec/bin/manage_agents -l
Or via the API:
TOKEN=$(curl -sk -u wazuh-wui:<PASSWORD> \
-X POST "https://localhost:55000/security/user/authenticate?raw=true")
curl -sk -H "Authorization: Bearer $TOKEN" \
"https://localhost:55000/agents?name=AGENT_NAME" \
| jq '.data.affected_items[]'
Key Mismatch
If the agent is registered but not connecting, keys may be out of sync. On the agent:
cat /var/ossec/etc/client.keys
On the manager:
grep "AGENT_NAME" /var/ossec/etc/client.keys
The keys must match. If they differ, remove the agent and re-register:
# On the manager
/var/ossec/bin/manage_agents -r AGENT_ID
# On the agent - repeat the registration procedure
SSL Certificate Issues
When using automatic registration (authd), verify the certificates:
# On the manager
openssl x509 -in /var/ossec/etc/sslmanager.cert -text -noout | grep "Not After"
An expired certificate prevents new agent registrations.
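Once the certificates are valid, enrollment is rerun from the agent side with agent-auth. A hedged sketch: the wrapper function below is a convenience, not official tooling, and takes the binary path as a parameter so it can be exercised outside a Wazuh install.

```shell
# Sketch: enroll (or re-enroll) an agent via authd automatic registration.
# The agent-auth binary path is a parameter for easy dry-running.
enroll_agent() {
  manager_ip=$1
  agent_auth=${2:-/var/ossec/bin/agent-auth}
  # Request a key from the manager, then restart to pick it up.
  "$agent_auth" -m "$manager_ip" && systemctl restart wazuh-agent
}
# On the agent host (192.0.2.10 stands in for your manager's address):
# enroll_agent 192.0.2.10
```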
Agent Log Analysis
On the agent host, inspect the log:
# Linux
tail -50 /var/ossec/logs/ossec.log
# Windows (PowerShell)
Get-Content "C:\Program Files (x86)\ossec-agent\ossec.log" -Tail 50
Common error messages:
| Message | Cause | Resolution |
|---|---|---|
| Unable to connect to MANAGER_IP:1514 | Network or firewall | Check routing and firewall rules |
| Invalid key | Key mismatch | Re-register the agent |
| Manager not found | Wrong manager address | Check the agent’s ossec.conf |
| Agent key not found | Agent not registered | Register the agent on the manager |
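The agent-side connectivity checks above can be bundled into a single probe. A convenience sketch (not official Wazuh tooling; assumes nc is installed on the agent host):

```shell
# Sketch: probe both Wazuh manager ports from the agent host and
# report reachability. Requires nc (netcat).
check_manager() {
  host=$1
  for port in 1514 1515; do
    if nc -z -w 3 "$host" "$port" 2>/dev/null; then
      echo "port $port: reachable"
    else
      echo "port $port: UNREACHABLE"
    fi
  done
}
# Usage (192.0.2.10 stands in for your manager's address):
# check_manager 192.0.2.10
```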
Indexer Issues
Indexer Fails to Start
systemctl status wazuh-indexer
journalctl -u wazuh-indexer -n 100
tail -50 /var/log/wazuh-indexer/wazuh-indexer.log
Insufficient Disk Space
The indexer stops writing when disk usage exceeds the threshold (95% by default):
df -h /var/lib/wazuh-indexer/
To free space, delete old indices:
# List indices by size
curl -sk -u admin:<PASSWORD> \
"https://localhost:9200/_cat/indices/wazuh-alerts-*?v&s=store.size:desc"
# Delete old indices
curl -sk -u admin:<PASSWORD> \
-X DELETE "https://localhost:9200/wazuh-alerts-4.x-2024.01.*"
After freeing space, reset the disk watermarks to their defaults and clear the read-only block:
curl -sk -u admin:<PASSWORD> \
-X PUT "https://localhost:9200/_cluster/settings" \
-H "Content-Type: application/json" \
-d '{
"persistent": {
"cluster.routing.allocation.disk.watermark.flood_stage": "95%",
"cluster.routing.allocation.disk.watermark.high": "90%",
"cluster.routing.allocation.disk.watermark.low": "85%"
}
}'
curl -sk -u admin:<PASSWORD> \
-X PUT "https://localhost:9200/_all/_settings" \
-H "Content-Type: application/json" \
-d '{"index.blocks.read_only_allow_delete": null}'
JVM Issues
OutOfMemoryError:
Increase the heap size in /etc/wazuh-indexer/jvm.options:
-Xms4g
-Xmx4g
Recommendation: allocate no more than 50% of the host’s RAM, with a maximum of 32 GB.
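A quick way to compute that value on the host itself (a convenience sketch, not official tooling; reads /proc/meminfo, so Linux only):

```shell
# Sketch: half the host RAM in whole GB, capped at 32 and floored at 1 —
# the value to plug into -Xms/-Xmx in jvm.options.
heap_gb() {
  total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  gb=$(( total_kb / 1024 / 1024 / 2 ))
  [ "$gb" -gt 32 ] && gb=32
  [ "$gb" -lt 1 ] && gb=1
  echo "$gb"
}
printf -- '-Xms%sg\n-Xmx%sg\n' "$(heap_gb)" "$(heap_gb)"
```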
systemctl restart wazuh-indexer
Cluster in Red Status
# Cluster health
curl -sk -u admin:<PASSWORD> \
"https://localhost:9200/_cluster/health?pretty"
# Unassigned shards
curl -sk -u admin:<PASSWORD> \
"https://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" \
| grep UNASSIGNED
Common causes:
- A cluster node is unreachable - verify the status of all nodes
- Insufficient disk space for shard placement - free disk space
- Corrupted index data - restore from a snapshot
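When shards failed allocation repeatedly (for example after a disk-full episode) and the underlying cause is fixed, the cluster can be told to retry. This uses the standard OpenSearch reroute API, sketched here with the same placeholder credentials as the commands above:

```
curl -sk -u admin:<PASSWORD> \
  -X POST "https://localhost:9200/_cluster/reroute?retry_failed=true"
```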
Dashboard Issues
Dashboard Not Loading
systemctl status wazuh-dashboard
tail -50 /var/log/wazuh-dashboard/opensearch_dashboards.log
Cannot Connect to the Indexer
The Dashboard fails to reach the indexer:
FATAL Error: connect ECONNREFUSED 127.0.0.1:9200
Confirm that the indexer is running and accessible:
systemctl status wazuh-indexer
curl -sk -u admin:<PASSWORD> "https://localhost:9200/"
Check the connection settings in /etc/wazuh-dashboard/opensearch_dashboards.yml:
opensearch.hosts: ["https://localhost:9200"]
opensearch.ssl.verificationMode: certificate
opensearch.username: "kibanaserver"
opensearch.password: "<PASSWORD>"
SSL Certificate Errors
Error: unable to verify the first certificate
Verify the certificate paths and expiration dates:
grep "server.ssl" /etc/wazuh-dashboard/opensearch_dashboards.yml
openssl x509 -in /etc/wazuh-dashboard/cert.pem -text -noout | grep "Not After"
Wazuh Plugin Displays an Error
If the Dashboard loads but the Wazuh plugin shows an error:
- Check the plugin configuration:
cat /usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml
- Confirm that the manager API is reachable:
curl -sk -u wazuh-wui:<PASSWORD> \
"https://localhost:55000/manager/info" | jq '.data'
Performance Issues
High Manager CPU Usage
Check the events-per-second rate:
# analysisd statistics
cat /var/ossec/var/run/wazuh-analysisd.state
# remoted statistics
cat /var/ossec/var/run/wazuh-remoted.state
Steps to reduce the load:
- Exclude noisy sources through agent-level filtering
- Increase the file integrity monitoring (FIM) scan interval
- Disable unused modules in the manager configuration
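The FIM scan interval, for example, is set in the <syscheck> block of ossec.conf. A sketch that doubles the default of 12 hours (43200 seconds):

```xml
<!-- Sketch for /var/ossec/etc/ossec.conf: scan for file changes once a day
     instead of every 12 hours to reduce manager load. -->
<syscheck>
  <frequency>86400</frequency>
</syscheck>
```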
Slow Indexer Queries
# Busiest (hot) threads on each node
curl -sk -u admin:<PASSWORD> \
"https://localhost:9200/_nodes/hot_threads"
# Index statistics
curl -sk -u admin:<PASSWORD> \
"https://localhost:9200/_cat/indices/wazuh-alerts-*?v&s=docs.count:desc&h=index,docs.count,store.size"
Recommendations:
- Configure an Index State Management (ISM) policy for index lifecycle
- Increase the JVM heap (up to 50% of RAM)
- Add nodes to the indexer cluster
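An ISM policy is created through the indexer API. The sketch below deletes alert indices after 90 days; the policy name, 90-day age, and index pattern are illustrative choices, not defaults:

```
curl -sk -u admin:<PASSWORD> \
  -X PUT "https://localhost:9200/_plugins/_ism/policies/delete-old-alerts" \
  -H "Content-Type: application/json" \
  -d '{
  "policy": {
    "description": "Delete wazuh-alerts indices after 90 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "90d" } }
        ]
      },
      { "name": "delete", "actions": [ { "delete": {} } ], "transitions": [] }
    ],
    "ism_template": [
      { "index_patterns": ["wazuh-alerts-*"], "priority": 100 }
    ]
  }
}'
```

The ism_template section attaches the policy automatically to newly created indices matching the pattern; existing indices must be attached manually.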
Alert Delivery Latency
If there is a significant delay between an event occurring and the alert appearing in the Dashboard:
- Check the analysisd queue:
grep queue /var/ossec/var/run/wazuh-analysisd.state
- Check Filebeat:
filebeat test output
tail -20 /var/log/filebeat/filebeat
- Check the indexing backlog:
curl -sk -u admin:<PASSWORD> \
"https://localhost:9200/_cat/thread_pool/write?v&h=node_name,active,queue,rejected"
Debug Mode
For more detailed diagnostics, enable debug mode on the relevant component.
Manager Debug Mode
In /var/ossec/etc/local_internal_options.conf (preferred over internal_options.conf, which is overwritten on upgrade), set:
analysisd.debug=2
remoted.debug=2
Restart the manager:
systemctl restart wazuh-manager
The log will contain significantly more detail. Remember to disable debug mode after troubleshooting to reduce the performance overhead.
Agent Debug Mode
In /var/ossec/etc/local_internal_options.conf on the agent host (preferred over internal_options.conf, which is overwritten on upgrade):
agent.debug=2
logcollector.debug=2
systemctl restart wazuh-agent
Indexer Debug Mode
In /etc/wazuh-indexer/opensearch.yml:
logger.level: debug
systemctl restart wazuh-indexer
Collecting Diagnostic Data
When contacting support or the community, prepare the following data:
# Component versions
rpm -qa | grep wazuh # or dpkg -l | grep wazuh
# Service status
systemctl status wazuh-manager wazuh-indexer wazuh-dashboard filebeat
# Manager configuration (without passwords)
grep -v -i password /var/ossec/etc/ossec.conf
# Last 200 lines of logs
tail -200 /var/ossec/logs/ossec.log > /tmp/wazuh-diag-ossec.log
tail -200 /var/log/wazuh-indexer/wazuh-indexer.log > /tmp/wazuh-diag-indexer.log
# Cluster health
curl -sk -u admin:<PASSWORD> \
"https://localhost:9200/_cluster/health?pretty" > /tmp/wazuh-diag-cluster.json
# System information
uname -a
free -h
df -h
Collect the files from /tmp/wazuh-diag-* and attach them to your support request.
Related Sections
- Upgrading Wazuh - issues after upgrading
- Backup and Recovery - restoring after failures
- Server API - API-based diagnostics
- Indexer Cluster - cluster issues