There are a lot of 5520 errors now. In addition, we sometimes see "A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.)"
The mitigation was finished yesterday (at least that is what it says on status.visma.com) ... but there is no doubt that something is not as it should be.
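In the meantime, a possible client-side stop-gap is to treat these 500/502/504 responses and 5520 bodies as transient and retry with exponential backoff. A minimal C# sketch, assuming HttpClient and purely illustrative retry limits (the URL and thresholds are placeholders, not anything prescribed by Visma):

// Minimal sketch: retry transient Visma.net API failures (500/502/504 status,
// or a 5520 IPPException body) with exponential backoff.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class TransientRetryExample
{
    static readonly HttpClient Http = new HttpClient();

    static async Task<string> GetWithRetryAsync(string url, int maxAttempts = 5)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                HttpResponseMessage response = await Http.GetAsync(url);
                string body = await response.Content.ReadAsStringAsync();

                bool transient =
                    response.StatusCode == HttpStatusCode.InternalServerError ||   // 500
                    response.StatusCode == HttpStatusCode.BadGateway ||            // 502
                    response.StatusCode == HttpStatusCode.GatewayTimeout ||        // 504
                    body.Contains("\"ExceptionFaultCode\":\"5520\"");              // IPPException 5520

                if (!transient)
                {
                    response.EnsureSuccessStatusCode(); // non-transient failures still throw
                    return body;
                }
            }
            catch (HttpRequestException) when (attempt < maxAttempts)
            {
                // Connection-level failures (e.g. forcibly closed connections) are retried too.
            }

            // Exponential backoff: 2s, 4s, 8s, ...
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }

        throw new Exception($"Gave up after {maxAttempts} attempts: {url}");
    }
}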
Status page has been updated. We should be fully operational without hiccups for the API.
https://status.visma.com/incidents/9z0fvnd1x177
It would be nice to get some feedback from the ISVs' side.
Sorry for the inconvenience.
Not related to slowness, but we still receive 5020 errors, and in addition there was a burst of 504s just a couple of minutes before your post.
Hello Andrea, have the transactions stabilized, or are you still experiencing those internal server errors?
New 5520 today:
Completed: /controller/api/v1/shipment/?lastModifiedDateTime=2023-10-30 08:06:30&lastModifiedDateTimeCondition=%3E&pageNumber=2
{"ExceptionType":"IPPException","ExceptionMessage":"","ExceptionFaultCode":"5520","ExceptionMessageID":"5520_9123b2ca-33b0-4a0a-ad28-74865a8cf878","ExceptionDetails":""}
Completed: /controller/api/v1/Inventory?availabilityLastModifiedDateTimeCondition=%3E&availabilityLastModifiedDateTime=2023-10-30 08:06:30&pageNumber=2
{"ExceptionType":"IPPException","ExceptionMessage":"","ExceptionFaultCode":"5520","ExceptionMessageID":"5520_8dc0df8a-bbd5-46d7-a30b-dfcf41035018","ExceptionDetails":""}
Could you send us the Company ID & API Client ID that you are experiencing the 5520s with?
I've sent a few examples to developersupport now
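For anyone else collecting examples for developersupport: the error body is plain JSON, so the fault code and ExceptionMessageID can be pulled out and logged automatically. A minimal C# sketch using System.Text.Json (the field names are taken from the responses above; the logging itself is only illustrative):

// Minimal sketch: extract ExceptionFaultCode and ExceptionMessageID from an
// IPPException error body so they can be attached to a support ticket.
using System;
using System.Text.Json;

class IppErrorExample
{
    static void LogIppError(string responseBody)
    {
        using JsonDocument doc = JsonDocument.Parse(responseBody);
        JsonElement root = doc.RootElement;

        if (root.TryGetProperty("ExceptionFaultCode", out JsonElement code) &&
            root.TryGetProperty("ExceptionMessageID", out JsonElement id))
        {
            Console.WriteLine($"IPPException {code.GetString()} ({id.GetString()}) logged at {DateTime.UtcNow:o}");
        }
    }

    static void Main()
    {
        // Example body copied from one of the 5520 responses above.
        LogIppError("{\"ExceptionType\":\"IPPException\",\"ExceptionMessage\":\"\",\"ExceptionFaultCode\":\"5520\",\"ExceptionMessageID\":\"5520_9123b2ca-33b0-4a0a-ad28-74865a8cf878\",\"ExceptionDetails\":\"\"}");
    }
}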
Got timeouts at about 13:00. Last 5520 was at 11:20
Hi! We are still encountering issues communicating with the API after the migration.
We are consistently receiving numerous 5520 errors, as well as a new type of error, as shown below.
Hello everybody,
We are also seeing a lot of errors on the API gateway, especially over the last two days and again this afternoon at 13:14. The API is still not working well, but the status page says it's OK.
Two example responses from our log:

Message: The remote server returned an error: (502) Bad Gateway.
Status: ProtocolError
ResponseStream: {"ExceptionType":"IPPException","ExceptionMessage":"","ExceptionFaultCode":"5102","ExceptionMessageID":"5102_f7c5e633-60e6-4d13-bc93-b8b69047658d","ExceptionDetails":""}
ResponseWebHeaderCollection:
  Transfer-Encoding: chunked
  Strict-Transport-Security: max-age=31536000; includeSubDomains
  ipp-request-id: d6fd7fce-e103-4ce1-809a-737545c12fdd
  Request-Context: appId=cid-v1:22f66f91-b042-4660-9803-ec9d7e371cb3
  Content-Type: application/json
  Date: Wed, 25 Oct 2023 13:14:04 GMT
  ETag: "6538b067-8f2"
ResponseUri: https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v2/journaltransaction?periodId=202002&pag...
ResponseStatusCode: 502
ResponseStatusDescription: Bad Gateway
DC_DateTimeStamp: 2023-10-25 15:14:04.961

Message: The remote server returned an error: (500) Internal Server Error.
Status: ProtocolError
ResponseStream: {"ExceptionType":"IPPException","ExceptionMessage":"","ExceptionFaultCode":"5520","ExceptionMessageID":"5520_4f743394-9353-418d-b173-b3b105f24d6a","ExceptionDetails":""}
ResponseWebHeaderCollection:
  Pragma: no-cache
  Transfer-Encoding: chunked
  Strict-Transport-Security: max-age=31536000; includeSubDomains,max-age=31536000; includeSubDomains
  ipp-request-id: a00d3fb6-8923-48b9-b57c-5f032833d00c
  X-Content-Type-Options: application/json
  X-Handled-By: Visma-PX.Export/AuthenticationManagerModule
  Referrer-Policy: origin-when-cross-origin
  Access-Control-Expose-Headers: Request-Context
  Content-Security-Policy: frame-ancestors *.visma.net erp096701000039 localhost
  Feature-Policy: geolocation 'none'; vr 'none'; payment 'none'; midi 'none'; microphone 'none'; fullscreen 'none'; encrypted-media 'none'; camera 'none'; autoplay 'none';
  X-XSS-Protection: 1;mode=block
  Request-Context: appId=cid-v1:22f66f91-b042-4660-9803-ec9d7e371cb3
  Cache-Control: no-cache,no-cache
  Content-Type: application/json
  Date: Wed, 25 Oct 2023 13:12:28 GMT
  Expires: -1
ResponseUri: https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v2/journaltransaction?periodId=202002&pag...
ResponseStatusCode: 500
ResponseStatusDescription: Internal Server Error
DC_DateTimeStamp: 2023-10-25 15:12:28.956
We also had errors before (13 over the last three months), but for the last two days we have been getting errors every time.
Any indication when this will be resolved?
Overview:
Date | Message
24-10-2023 22:55 | System.Net.WebException: The remote server returned an error: (500) Internal Server Error.
23-10-2023 22:37 | System.Net.WebException: The remote server returned an error: (500) Internal Server Error.
6-10-2023 22:44 | System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
4-10-2023 00:30 | System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
Kind Regards,
Frank Schimmel
Hogeschool Leiden
We are aware of the challenges and are troubleshooting the issue. Follow the status here:
https://status.visma.com/incidents/9z0fvnd1x177
In addition to the previously reported errors, we have also started to get null results for calls that previously returned empty arrays or data in the response. For example, calls to controller/api/v1/subaccount occasionally return a null value for Segments; it does not happen every time, and when the same call is retried for the same customer the value is no longer null.
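As a stop-gap, one option is to treat a null Segments value as transient and simply re-request. A minimal C# sketch, assuming the subaccount endpoint returns a JSON array and that retrying is acceptable for your use case (the property-name casing is an assumption):

// Minimal sketch: re-request controller/api/v1/subaccount when an item comes
// back with a null Segments value, since a retry usually returns real data.
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class SubaccountRetryExample
{
    static readonly HttpClient Http = new HttpClient();

    static async Task<string> GetSubaccountsAsync(string url, int maxAttempts = 3)
    {
        string body = string.Empty;
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            body = await Http.GetStringAsync(url);
            if (!HasNullSegments(body))
                return body;

            await Task.Delay(TimeSpan.FromSeconds(2 * attempt)); // brief pause before retrying
        }
        return body; // still null after retries; the caller decides how to handle it
    }

    static bool HasNullSegments(string json)
    {
        using JsonDocument doc = JsonDocument.Parse(json);
        if (doc.RootElement.ValueKind != JsonValueKind.Array)
            return false;

        foreach (JsonElement item in doc.RootElement.EnumerateArray())
        {
            // Casing of the property name ("segments" vs "Segments") is assumed here.
            if ((item.TryGetProperty("segments", out JsonElement seg) ||
                 item.TryGetProperty("Segments", out seg)) &&
                seg.ValueKind == JsonValueKind.Null)
            {
                return true;
            }
        }
        return false;
    }
}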
During the night there have been occurrences of both 5520 errors and the "The 'CompressedContent' type failed to serialize the response body for..." error.
I've logged about 150 of these over the last 12 hours.
Examples:
Completed: /controller/api/v1/PurchaseOrder?lastModifiedDateTimeCondition=%3E&lastModifiedDateTime=2023-10-25 03:48:04&orderStatus=Hold&pageNumber=1
{"ExceptionType":"IPPException","ExceptionMessage":"","ExceptionFaultCode":"5520","ExceptionMessageID":"5520_21dd55cf-35b0-440e-b5cb-ce19c4664747","ExceptionDetails":""}
Url: /controller/api/v1/carrier/
Content received:
{"message":"An error has occurred.","exceptionMessage":"The 'CompressedContent' type failed to serialize the response body for content type 'application/json; charset=utf-8'.","exceptionType":"System.InvalidOperationException","stackTrace":null,"innerException":{"message":"An error has occurred.","exceptionMessage":"Timeout performing EVAL (5000ms), next: SET CONFIGCACHE_4b9b5db0-d272-11eb-b60b-0638767d04b5_lock, inst: 10, qu: 0, qs: 6, aw: False, rs: ReadAsync, ws:...
Hello, we expect performance to stabilize today. The issue was related to a couple of clusters behind the gateway, so only some of the databases were affected; this should be fixed now. Please monitor your integrations and let us know if things have not improved by tomorrow.
Thanks.
A lot of 502 Bad Gateway errors since 08:59, and it's still failing.
What about the Redis errors? We haven't seen those before the move to Azure.
{"message":"An error has occurred.","exceptionMessage":"The 'CompressedContent' type failed to serialize the response body for content type 'application/json; charset=utf-8'.","exceptionType":"System.InvalidOperationException","stackTrace":null,"innerException":{"message":"An error has occurred.","exceptionMessage":"Timeout performing EVAL (5000ms), next: EVAL, inst: 25, qu: 0, qs: 14, aw: False, rs: ReadAsync, ws: Idle, in: 0, serverEndpoint: 10.10.6.4:15001, mc: 1/1/0, mgr: 10 of 10 available, clientName: erp09670100003H, PerfCounterHelperkeyHashSlot: 505, IOCP: (Busy=0,Free=1000,Min=8,Max=1000), WORKER: (Busy=27,Free=32740,Min=8,Max=32767), v: 2.1.58.34321 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)","ex
Hi,
We are currently experiencing some network issues which results in lower performance for our customers. Together with Microsoft we are investigating the issues.
Looks much better now!