Upgrade from the same major version (3.x)
The following steps show how to upgrade to the latest available version of Wazuh 3.x (which implies upgrading to the latest version of Elastic Stack 6.x).
Starting the upgrade
If you followed our manager or agent installation guides, you probably disabled the repository to avoid unintended upgrades. It's necessary to enable it again to get the latest packages.
# sed -i "s/^enabled=0/enabled=1/" /etc/yum.repos.d/wazuh.repo
Upgrade the Wazuh manager
Note
Since Wazuh v3.7.0 the File Integrity Monitoring database is no longer used. To add the file and registry entries stored by previous versions to Wazuh DB, it's necessary to run the FIM migration tool.
- Upgrade the wazuh-manager package:
# yum upgrade wazuh-manager
- Upgrade the wazuh-api package:
# yum upgrade wazuh-api
Note
The installation of the updated packages will automatically restart the services for the Wazuh manager, API and agents. Your Wazuh configuration file will remain unmodified, so you'll need to manually add the settings for the new capabilities. Check the User Manual for more information.
Finishing the Wazuh upgrade
You've finished upgrading your Wazuh installation to the latest version. Now you can disable the Wazuh repositories again to avoid unintended upgrades and compatibility issues.
# sed -i "s/^enabled=1/enabled=0/" /etc/yum.repos.d/wazuh.repo
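The same sed toggle works for any repo file, and it's easy to sanity-check the state before and after. Below is a sketch: `repo_enabled` is a hypothetical helper (not a Wazuh tool), demoed against a throwaway file rather than the real repo config:

```shell
# Hypothetical helper: report whether a yum .repo file is enabled (enabled=1).
repo_enabled() {
  grep -q '^enabled=1' "$1"
}

# Demo on a throwaway file instead of /etc/yum.repos.d/wazuh.repo:
tmp=$(mktemp)
printf '[wazuh_repo]\nenabled=1\n' > "$tmp"
repo_enabled "$tmp" && echo "repo is enabled"
sed -i 's/^enabled=1/enabled=0/' "$tmp"
repo_enabled "$tmp" || echo "repo is disabled"
rm -f "$tmp"
```

Point it at your actual repo file once you've confirmed the behavior.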
Upgrade to the latest Elastic Stack version
Since the release of Wazuh 3.0.0, there have been several updates to the 6.x version of the Elastic Stack, introducing bugfixes and important changes. To use the latest version of Wazuh, it's necessary to install the latest compatible Elastic Stack packages.
# systemctl stop filebeat     (skip this in a standalone setup, where Filebeat should not be installed; Filebeat is only used in a clustered setup, where it sends logs back to the manager)
# systemctl stop logstash
# systemctl stop kibana
# systemctl stop elasticsearch
- Enable the Elastic repository:
If you followed our Elastic Stack Installation Guide, you probably disabled the repository to avoid unintended Elastic Stack upgrades. It's necessary to enable it again to get the latest packages.
# sed -i "s/^enabled=0/enabled=1/" /etc/yum.repos.d/elastic.repo
Upgrade Elasticsearch
- Upgrade the elasticsearch package:
# yum install elasticsearch-6.5.1
- Start the Elasticsearch service:
# systemctl daemon-reload
# systemctl enable elasticsearch.service
# systemctl start elasticsearch.service
It's important to wait until the Elasticsearch server finishes starting. Check the current status with the following command, which should give you a response like the one shown below:
# curl "http://localhost:9200/?pretty"
{
"name" : "Zr2Shu_",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "M-W_RznZRA-CXykh_oJsCQ",
"version" : {
"number" : "6.5.1",
"build_flavor" : "default",
"build_type" : "rpm",
"build_hash" : "053779d",
"build_date" : "2018-07-20T05:20:23.451332Z",
"build_snapshot" : false,
"lucene_version" : "7.3.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
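If you want a script to gate on the reported version instead of eyeballing the JSON, the "number" field can be pulled out with standard tools. This is a sketch; `es_version` is a hypothetical helper, demoed here against a canned response rather than a live cluster:

```shell
# Sketch: extract the version "number" field from the Elasticsearch
# root-endpoint JSON read on stdin.
es_version() {
  grep '"number"' | sed 's/.*"number" *: *"\([^"]*\)".*/\1/'
}

# Against a live node you would pipe the real response:
#   curl -s "http://localhost:9200/?pretty" | es_version
# Demo with a canned fragment:
sample='{ "version" : { "number" : "6.5.1" } }'
echo "$sample" | es_version    # prints 6.5.1
```

A wrapper could then compare the result against the expected 6.5.1 before continuing the upgrade.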
Updating the Elasticsearch template to the latest version is mandatory in order to avoid compatibility issues with the latest versions of Wazuh and the Elastic Stack.
# curl https://raw.githubusercontent.com/wazuh/wazuh/3.7/extensions/elasticsearch/wazuh-elastic6-template-alerts.json | curl -X PUT "http://localhost:9200/_template/wazuh" -H 'Content-Type: application/json' -d @-
Upgrade Logstash
- Upgrade the logstash package:
- For CentOS/RHEL/Fedora:
# yum install logstash-6.5.1
- Download and set the Wazuh configuration for Logstash:
- Local configuration:
# cp /etc/logstash/conf.d/01-wazuh.conf /backup_directory/01-wazuh.conf.bak
# curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/3.7/extensions/logstash/01-wazuh-local.conf
# usermod -a -G ossec logstash
- Remote configuration: (We are not using this in our standalone setup and therefore do not need to run this)
# cp /etc/logstash/conf.d/01-wazuh.conf /backup_directory/01-wazuh.conf.bak
# curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/3.7/extensions/logstash/01-wazuh-remote.conf
- Start the Logstash service:
# systemctl daemon-reload
# systemctl enable logstash.service
# systemctl start logstash.service
Note
The Logstash configuration file has been replaced with an updated one. If you previously configured encryption between Filebeat and Logstash, don't forget to review Setting up SSL for Filebeat and Logstash again if you're using a distributed architecture.
Upgrade Kibana
- Upgrade the kibana package:
# yum install kibana-6.5.1
- Uninstall the Wazuh app from Kibana:
- Update file permissions. This will avoid several errors prior to updating the app:
# chown -R kibana:kibana /usr/share/kibana/optimize
# chown -R kibana:kibana /usr/share/kibana/plugins
# sudo -u kibana /usr/share/kibana/bin/kibana-plugin remove wazuh
# rm -rf /usr/share/kibana/optimize/bundles
# sudo -u kibana NODE_OPTIONS="--max-old-space-size=3072" /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.7.1_6.5.1.zip
Warning
The Wazuh app installation process may take several minutes. Please wait patiently.
- Start the Kibana service:
# systemctl daemon-reload
# systemctl enable kibana.service
# systemctl start kibana.service
This section only applies if you have a clustered/distributed setup.
Upgrade Filebeat
- Upgrade the filebeat package:
# yum install filebeat-6.5.1
- Start the Filebeat service:
# systemctl daemon-reload
# systemctl enable filebeat.service
# systemctl start filebeat.service
Finishing the Elastic Stack upgrade
You've finished upgrading the Elastic Stack to the latest version. Now you can disable the Elastic Stack repositories again to avoid unintended upgrades and compatibility issues with the Wazuh app.
# sed -i "s/^enabled=1/enabled=0/" /etc/yum.repos.d/elastic.repo
Things you will need to fix after the upgrade
1. Running the migration tool (for installations upgraded from versions before 3.7):
- If you upgraded from Wazuh 3.6 or older you will need to run the following migration tool, which migrates the FIM database into the new format used by Wazuh 3.7. When the tool was first introduced it would fail without a clean exit if it couldn't decode a line, halting the migration. They have since fixed that; the failure looked something like this:
2018-11-12 15:45:38 [INFO] [32/239] Added 3339 file entries in agent '033' database.
2018-11-12 15:45:38 [INFO] Setting FIM database for agent '033' as completed...
2018-11-12 15:45:38 [INFO] [33/239] Upgrading FIM database for agent '034'...
2018-11-12 15:45:38 [INFO] [33/239] Added 61 file entries in agent '034' database.
2018-11-12 15:45:38 [INFO] [33/239] Upgrading FIM database (syscheck-registry) for agent '034'...
2018-11-12 15:45:38 [ERROR] Couldn't decode line at syscheck database.
Traceback (most recent call last):
  File "./fim_migrate", line 320, in <module>
    if not check_file_entry(agt[0], decoded[2], s):
  File "./fim_migrate", line 91, in check_file_entry
    msg = msg + cfile + b"';"
TypeError: cannot concatenate 'str' and 'NoneType' objects
The working migration tool is below:
https://raw.githubusercontent.com/wazuh/wazuh/3.7/tools/migration/fim_migrate.py
2. Error "API version type mismatch 3.6.1":
- Next, run this on the server side to confirm the versions match:
# cat /usr/share/kibana/plugins/wazuh/package.json | grep -i -E "version|revision"
"version": "3.7.0",
"revision": "0413",
"version": "6.4.3"
If all those match then you simply need to do the following to fix it.
- Delete the .wazuh-version index and restart Kibana:
# curl -XDELETE http://elastic_ip:9200/.wazuh-version
# systemctl restart kibana
Wait about 30 seconds to a minute, then open a new window in your browser; you should now be able to navigate without any more trouble from the version mismatch.
Notes: The Wazuh app recreates that index when you restart Kibana if it's not present. If your standalone setup uses localhost, then the curl command should use localhost and not the Elastic IP.
3. The agent list will default back to 17 items per screen, which is extremely annoying. You will need to fix this in the following manner:
# systemctl stop kibana
Let's open the file /usr/share/kibana/plugins/wazuh/public/templates/agents-prev/agents-prev.html and look for lines 103-109:
<wz-table ... flex ... path="'/agents'" ... keys="['id',{value:'name',size:2},'ip','status','group','os.name','os.version','version']" ... allow-click="true" ... row-sizes="[17,15,13]"> ... </wz-table>
The wz-table tag is a Wazuh custom directive whose parameters make it easy to change that limit.
Replace [17,15,13] with your desired sizes, e.g. [50,50,50], where each value refers to a different screen size. Use 50 for all screen sizes and you'll see 50 agents per page regardless of your screen size. Use whatever value you like (100, 150, ...), but my suggestion is not to go above 50, for Angular performance reasons.
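If you prefer to script the edit rather than opening the file by hand, a sed one-liner can make the same replacement. This is a sketch, demoed on a throwaway copy (the attribute markup is simplified); point it at the real agents-prev.html once you've verified it:

```shell
# Sketch: swap the default row-sizes attribute for [50,50,50] in place.
# Demo file stands in for
# /usr/share/kibana/plugins/wazuh/public/templates/agents-prev/agents-prev.html
tmpl=$(mktemp)
echo '<wz-table path="/agents" row-sizes="[17,15,13]"></wz-table>' > "$tmpl"
sed -i 's/row-sizes="\[17,15,13\]"/row-sizes="[50,50,50]"/' "$tmpl"
grep row-sizes "$tmpl"    # now shows row-sizes="[50,50,50]"
rm -f "$tmpl"
```

Keep a backup of the original file first, since a Wazuh app reinstall will overwrite the change anyway.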
Once you are done, save and close the file. Now remove the old bundles and check the permissions again:
# rm -rf /usr/share/kibana/optimize/bundles
# chown -R kibana:kibana /usr/share/kibana/optimize
# chown -R kibana:kibana /usr/share/kibana/plugins
Restart Kibana:
# systemctl restart kibana
It takes a few minutes to complete; you can check the status using the following command:
# systemctl status kibana -l
You'll see "Optimizing...". Once you see "App ready to be used", you can remove the cache/cookies from your browser and type your app address to access it.
4. Errors in the Wazuh log after the upgrade: [FORBIDDEN/12/index read-only / allow delete (api)]:
- If you see the following in your wazuh log:
# tail -n500 /usr/share/kibana/optimize/wazuh-logs/wazuhapp.log
{"date":"2018-11-22T14:24:15.613Z","level":"info","location":"[monitoring][init]","message":"Checking if wazuh-monitoring pattern exists..."}
{"date":"2018-11-22T14:24:15.625Z","level":"error","location":"[initialize][checkKnownFields]","message":"[cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
{"date":"2018-11-22T14:24:15.632Z","level":"info","location":"[monitoring][init]","message":"Updating known fields for wazuh-monitoring pattern..."}
{"date":"2018-11-22T14:24:15.646Z","level":"info","location":"[monitoring][init]","message":"Didn't find wazuh-monitoring pattern for Kibana v6.x. Proceeding to create it..."}
{"date":"2018-11-22T14:24:15.650Z","level":"info","location":"[monitoring][createWazuhMonitoring]","message":"No need to delete old wazuh-monitoring pattern."}
{"date":"2018-11-22T14:24:15.650Z","level":"info","location":"[monitoring][configureKibana]","message":"Creating index pattern: wazuh-monitoring-3.x-*"}
{"date":"2018-11-22T14:24:15.658Z","level":"info","location":"[initialize][checkAPIEntriesExtensions]","message":"Successfully updated API entry extensions with ID: 1535484412304"}
{"date":"2018-11-22T14:24:15.660Z","level":"error","location":"[monitoring][configureKibana]","message":"[cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
- This usually means that one of your partitions is nearly full, and Elasticsearch has put the index into read-only mode because of it. Super annoying...
To fix this you must:
- First, add disk space to your LVM; if you don't know how to do this, look it up... haha 😛
- Then go into the Kibana interface and, under Dev Tools, run the following:
PUT wazuh-monitoring-*/_settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
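The same Dev Tools request can be issued from the shell with curl. A sketch follows; the host/port is an assumption (adjust for your setup), and here the command is only printed rather than executed so nothing is sent to a live cluster:

```shell
# Sketch: clear the read-only block on the wazuh-monitoring indices.
# localhost:9200 is an assumption; substitute your Elasticsearch address.
payload='{ "index": { "blocks": { "read_only_allow_delete": "false" } } }'
echo curl -X PUT "http://localhost:9200/wazuh-monitoring-*/_settings" \
     -H "Content-Type: application/json" -d "$payload"
```

Drop the leading `echo` to actually run it. Note that Elasticsearch re-applies the block if the disk fills up again, so fix the disk space first.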
- Make sure to restart Kibana:
# systemctl restart kibana
Once Kibana is restarted, the log should show something like this:
# tail -n500 /usr/share/kibana/optimize/wazuh-logs/wazuhapp.log
{"date":"2018-11-23T00:00:02.464Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
Note: initially you might only see one entry for that day; after a few days the logs will look like this:
{"date":"2018-11-22T14:25:09.166Z","level":"error","location":"[monitoring][configureKibana]","message":"[cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}
{"date":"2018-11-23T00:00:02.464Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-11-24T00:00:01.894Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-11-25T00:00:02.055Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-11-26T00:00:01.983Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-11-27T00:00:02.785Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-11-28T00:00:02.458Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-11-29T00:00:02.163Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-11-29T14:41:46.871Z","level":"info","location":"[initialize]","message":"Kibana index: .kibana"}
{"date":"2018-11-29T14:41:46.874Z","level":"info","location":"[initialize]","message":"App revision: 0413"}
{"date":"2018-11-29T14:41:46.874Z","level":"info","location":"[monitoring][configuration]","message":"wazuh.monitoring.enabled: true"}
{"date":"2018-11-29T14:41:46.874Z","level":"info","location":"[monitoring][configuration]","message":"wazuh.monitoring.frequency: 3600 (0 */60 * * * *) "}
{"date":"2018-11-29T14:41:46.874Z","level":"info","location":"[monitoring][checkKibanaStatus]","message":"Waiting for Kibana and Elasticsearch servers to be ready..."}
{"date":"2018-11-29T14:41:48.241Z","level":"info","location":"[initialize][checkWazuhIndex]","message":"Checking .wazuh index."}
{"date":"2018-11-29T14:41:48.241Z","level":"info","location":"[initialize][checkWazuhVersionIndex]","message":"Checking .wazuh-version index."}
{"date":"2018-11-29T14:41:48.246Z","level":"info","location":"[monitoring][init]","message":"Creating/Updating wazuh-agent template..."}
{"date":"2018-11-29T14:41:48.246Z","level":"info","location":"[monitoring][checkTemplate]","message":"Updating wazuh-monitoring template..."}
{"date":"2018-11-29T14:41:48.945Z","level":"info","location":"[initialize][checkKnownFields]","message":"x-pack enabled: no"}
{"date":"2018-11-29T14:41:48.999Z","level":"info","location":"[initialize][checkKnownFields]","message":"Found 2 index patterns"}
{"date":"2018-11-29T14:41:48.999Z","level":"info","location":"[initialize][checkKnownFields]","message":"Found 1 valid index patterns for Wazuh alerts"}
{"date":"2018-11-29T14:41:48.999Z","level":"info","location":"[initialize][checkKnownFields]","message":"Default index pattern found"}
{"date":"2018-11-29T14:41:48.999Z","level":"info","location":"[initialize][checkKnownFields]","message":"Refreshing known fields for \"index-pattern:wazuh-alerts-3.x-*\""}
{"date":"2018-11-29T14:41:49.092Z","level":"info","location":"[initialize][checkKnownFields]","message":"App ready to be used."}
{"date":"2018-11-29T14:41:49.181Z","level":"info","location":"[initialize][checkAPIEntriesExtensions]","message":"Checking extensions consistency for all API entries"}
{"date":"2018-11-29T14:41:49.188Z","level":"info","location":"[initialize][checkAPIEntriesExtensions]","message":"Successfully updated API entry extensions with ID: 1535484412304"}
{"date":"2018-11-29T14:41:49.266Z","level":"info","location":"[monitoring][init]","message":"Creating today index..."}
{"date":"2018-11-29T14:41:49.295Z","level":"info","location":"[monitoring][init]","message":"Checking if wazuh-monitoring pattern exists..."}
{"date":"2018-11-29T14:41:49.314Z","level":"info","location":"[monitoring][init]","message":"Updating known fields for wazuh-monitoring pattern..."}
{"date":"2018-11-29T14:41:49.320Z","level":"info","location":"[monitoring][init]","message":"Skipping wazuh-monitoring pattern creation. Already exists."}
{"date":"2018-11-30T00:00:01.567Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-12-01T00:00:02.368Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-12-02T00:00:01.297Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-12-03T00:00:02.052Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-12-04T00:00:01.602Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-12-05T00:00:01.886Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
{"date":"2018-12-06T00:00:02.870Z","level":"info","location":"[monitoring][createIndex]","message":"Successfully created today index."}
5. Set up a DiskSpaceWatch cron:
- I was getting annoyed with dealing with the disk-space issues, which lead to loss of logs, so I set up a little bash script called "/usr/bin/diskspacewatch". The script runs from root's cron every 30 minutes; to get to the cron, type 'crontab -e'.
#!/bin/sh
df -h | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read output;
do
  echo $output
  usep=$(echo $output | awk '{ print $1 }' | cut -d'%' -f1)
  partition=$(echo $output | awk '{ print $2 }')
  if [ $usep -ge 75 ]; then
    echo "Running out of space!! on wazuh production server. Add space or wazuh will go into read only mode. \"$partition ($usep%)\" on $(hostname) as on $(date)" |
    mail -s "Alert: Almost out of disk space, add diskspace to wazuhprod server. $usep%" nick@nicktailor.com
  fi
done
- If any of the partitions reaches 75 percent, it will send out an email alert to nick@nicktailor.com.
- This is to help avoid log loss from Wazuh going into read-only mode because of disk space.
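The threshold check at the heart of the script can be factored into a small function so it's easy to test against canned `df` output before trusting it in cron. A sketch (`over_threshold` is a hypothetical name, not part of the script above):

```shell
# Sketch: read `df -h`-style lines on stdin and print "partition usage%"
# for any filesystem at or above the given percentage.
over_threshold() {
  limit=$1
  awk -v limit="$limit" '{ gsub(/%/, "", $5); if ($5+0 >= limit) print $1 " " $5 "%" }'
}

# Demo with fake df lines (Filesystem Size Used Avail Use% Mounted):
printf '/dev/sda1 50G 45G 5G 90%% /\n/dev/sdb1 100G 10G 90G 10%% /data\n' \
  | over_threshold 75    # prints: /dev/sda1 90%
```

In the real script you would feed it `df -h | grep -vE '^Filesystem|tmpfs|cdrom'` and pipe any output to `mail`.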
- Upgrading Wazuh agents to 3.7
Note: Lastly, the Wazuh documentation says the agent is backwards compatible; however, this is not true in my opinion, because features stop working, which forces you to update all the agents. And this is not as simple as just updating the agent package...
- If you attempt to update the agent simply via yum or apt, the agent will lose the manager IP and the key that was created.
- This particular piece of the upgrade is something that you should test in a test environment, by cloning your entire system to a dev one and running simulations. I learned this the hard way and had to be inventive to get it working.
- There is an agent_upgrade tool they provide which is supposed to download the new agent, install it, and re-copy the manager IP and key to the agent, all in one go.
- List out the agents that need to be upgraded
- /var/ossec/bin/agent_upgrade -l
Example:
waz01 ~]# /var/ossec/bin/agent_upgrade -l
ID Name Version
003 centosnewtemp Wazuh v3.6.0
165 test1 Wazuh v3.6.1
192 test2 Wazuh v3.6.1
271 test3 Wazuh v3.3.1
277 test4 Wazuh v3.3.1
280 test5 Wazuh v3.3.1
306 test6 Wazuh v3.3.1
313 test6 Wazuh v3.3.1
- Manual update of an agent (successful):
# /var/ossec/bin/agent_upgrade -d -a 003
Manager version: v3.7.0
Agent version: v3.3.1
Agent new version: v3.7.0
WPK file already downloaded: /var/ossec/var/upgrade/wazuh_agent_v3.7.0_windows.wpk - SHA1SUM: 79678fd4ab800879aacd4451a64e799c62688b64
Upgrade PKG: wazuh_agent_v3.7.0_windows.wpk (2108 KB)
MSG SENT: 271 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: ok
MSG SENT: 271 com lock_restart -1
RESPONSE: ok
Chunk size: 512 bytes
Sending: /var/ossec/var/upgrade/wazuh_agent_v3.7.0_windows.wpk
MSG SENT: 271 com close wazuh_agent_v3.7.0_windows.wpk
RESPONSE: ok
MSG SENT: 271 com sha1 wazuh_agent_v3.7.0_windows.wpk
RESPONSE: ok 79678fd4ab800879aacd4451a64e799c62688b64
WPK file sent
MSG SENT: 271 com upgrade wazuh_agent_v3.7.0_windows.wpk upgrade.bat
RESPONSE: ok 0
Upgrade procedure started
MSG SENT: 271 com upgrade_result
RESPONSE: err Maximum attempts exceeded
MSG SENT: 271 com upgrade_result
RESPONSE: err Cannot read upgrade_result file.
MSG SENT: 271 com upgrade_result
RESPONSE: ok 0
Agent upgraded successfully
- Using the list provided by agent_upgrade, you can copy the agent IDs to a txt file:
- vi agentupgrade.txt
003
165
192
271
Etc…
- You can then use a for loop like so to cycle through the list:
- for name in `cat agentupgrade.txt`; do /var/ossec/bin/agent_upgrade -a $name; echo $name; done
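Since some agents fail to upgrade (see the Windows errors below), it helps to track failures instead of letting them scroll by. Here is a sketch of the same loop with minimal failure tracking; `agent_upgrade` below is a stub standing in for /var/ossec/bin/agent_upgrade purely so the sketch is self-contained:

```shell
# Stub for /var/ossec/bin/agent_upgrade so the sketch runs anywhere:
# pretend agent 271 fails, everything else succeeds.
agent_upgrade() { [ "$2" != "271" ]; }

# Loop over agent IDs (in practice: read them from agentupgrade.txt),
# collecting the IDs that failed instead of stopping.
failed=""
for id in 003 165 192 271; do
  if agent_upgrade -a "$id"; then
    echo "upgraded $id"
  else
    failed="$failed $id"
  fi
done
echo "failed:$failed"    # prints: failed: 271
```

Against a real manager you would replace the stub with the actual /var/ossec/bin/agent_upgrade binary and retry or manually fix whatever lands in the failed list.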
Notes: Avoid exiting the script once it's running, as that may cause issues; I didn't put in any error handling to fail and exit, obviously. The other issue I noticed is that Windows 2016 and Windows 7 machines had trouble updating the agent; I saw the errors indicated below. The tool would update the agent and then time out without re-inputting the manager IP and key. I had to manually update the failed machines, as Wazuh was unable to provide me with an answer as to why it was failing, and I was able to replicate the issue on 50 machines. So in short, if you're going to upgrade and have 1000 machines, I highly recommend doing lots of simulations first, as this is one of the most important parts of the upgrade, and they fail to mention it in their documentation.
Errors:
# /var/ossec/bin/agent_upgrade -d -a 298
Manager version: v3.7.0
Agent version: v3.3.1
Agent new version: v3.7.0
WPK file already downloaded: /var/ossec/var/upgrade/wazuh_agent_v3.7.0_windows.wpk - SHA1SUM: 79678fd4ab800879aacd4451a64e799c62688b64
Upgrade PKG: wazuh_agent_v3.7.0_windows.wpk (2108 KB)
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
MSG SENT: 298 com open wb wazuh_agent_v3.7.0_windows.wpk
RESPONSE: err Maximum attempts exceeded
Error 1715: Error sending WPK file: Maximum attempts exceeded
Traceback (most recent call last):
  File "/var/ossec/bin/agent_upgrade", line 165, in <module>
    main()
  File "/var/ossec/bin/agent_upgrade", line 119, in main
    rl_timeout=-1 if args.timeout == None else args.timeout, use_http=use_http)
  File "/var/ossec/bin/../framework/wazuh/agent.py", line 2206, in upgrade
    show_progress=show_progress, chunk_size=chunk_size, rl_timeout=rl_timeout, use_http=use_http)
  File "/var/ossec/bin/../framework/wazuh/agent.py", line 2102, in _send_wpk_file
    raise WazuhException(1715, data.replace("err ",""))
wazuh.exception.WazuhException: Error 1715 - Error sending WPK file: Maximum attempts exceeded