Hi
For the last few days, inventory transfers through /controller/api/v1/inventoryTransfer have resulted in timeout error messages returned from the API gateway.
The request itself does not time out, but a timeout message is returned anyway.
Can you please look into it?
Example: company ID 2693113, today 19.08.2021 10:16:35, payload:
{"warehouseId":{"value":"1"},"toWarehouseId":{"value":"1"},"transferLines":[{"toLocationId":{"value":"GA-16-05"},"operation": "Insert","inventoryNumber":{"value":"23420"},"locationId":{"value":"VM-00-00"},"quantity":{"value": 11},"uom":{"value":"STK"}}],"hold":{"value":false},"externalReference":{"value":""},"description":{"value":""}}
Hey Magnus
Thanks again for the debug session today. As promised, I'm just bumping this ticket, as it seems this issue is still ongoing.
Some timing on response time: our client runs 20 warehouses doing 600-700 transfers weekly.
For one day of data it takes around 30-40 seconds to retrieve from this endpoint:
https://integration.visma.net/API/controller/api/v1/inventoryTransfer?status=Balanced&lastModifiedDa... 08:00:27.621&lastModifiedDateTimeCondition=%3E
If I query a longer period, let's say 14-30 days, the request time ends up around 1-1.5 minutes (this is pretty consistent).
It seems like there is still something that can be done about performance in this endpoint 🙂
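For anyone timing this themselves, a minimal measurement sketch (token auth is assumed to already be in place; parameter names are taken from the URL above):

```python
import time
import requests

BASE = "https://integration.visma.net/API/controller/api/v1"
HEADERS = {"Authorization": "Bearer ..."}  # placeholder token

params = {
    "status": "Balanced",
    "lastModifiedDateTime": "2021-09-01 08:00:27",  # example cutoff in the format used above
    "lastModifiedDateTimeCondition": ">",           # requests encodes '>' as %3E automatically
}

t0 = time.monotonic()
resp = requests.get(f"{BASE}/inventoryTransfer", params=params, headers=HEADERS, timeout=180)
print(f"{resp.status_code} in {time.monotonic() - t0:.1f} s, {len(resp.content)} bytes")
```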
Hello.
You can now filter on warehouse on the inventoryTransfer endpoint:
Release notes 9.41
This should help you get more targeted data for your use case.
We are also working on improving this endpoint in general; that improvement will come in a couple of weeks.
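If the new filter follows the endpoint's existing query-parameter style, usage might look like this sketch; the parameter name `warehouseId` is an assumption here, so check the 9.41 release notes for the exact name:

```python
import requests

BASE = "https://integration.visma.net/API/controller/api/v1"
HEADERS = {"Authorization": "Bearer ..."}  # placeholder token

params = {
    "status": "Balanced",
    "warehouseId": "1",  # hypothetical parameter name for the new warehouse filter
}
resp = requests.get(f"{BASE}/inventoryTransfer", params=params, headers=HEADERS, timeout=180)
print(resp.status_code, len(resp.content))
```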
Hello Magnus,
Last night we moved our Visma import, which used to start at 6:30, back to 1:20 at night (CET), and we got a timeout again. It looks like the extra API server capacity is not enough at this time of night.
We got a 504 response at 01:56:44 and the import stopped.
This is our API log from the run that started at 1:20:
09/14/2021, 01:55:45 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=2 200 200 619 B 3.26 MB 8.92 s 8.89 s 195.169.84.220
09/14/2021, 01:55:54 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=3 200 200 619 B 3.26 MB 8.61 s 8.59 s 195.169.84.220
09/14/2021, 01:56:03 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=4 200 200 619 B 3.26 MB 8.74 s 8.72 s 195.169.84.220
09/14/2021, 01:56:12 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=5 200 200 619 B 3.26 MB 12.61 s 12.58 s 195.169.84.220
09/14/2021, 01:56:24 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=6 200 200 619 B 3.69 MB 12.06 s 12.04 s 195.169.84.220
09/14/2021, 01:56:37 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=7 200 200 619 B 2.47 MB 7.46 s 7.44 s 195.169.84.220
09/14/2021, 01:56:44 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=8 504 504 619 B 401 B 10.17 s 10.17 s 195.169.84.220
We restarted the import at 6:30. This import went fine and continued till the end.
09/14/2021, 07:06:02 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=5 200 200 619 B 3.26 MB 14.21 s 14.19 s None 195.169.84.220
09/14/2021, 07:06:17 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=6 200 200 619 B 3.69 MB 9.37 s 9.34 s None 195.169.84.220
09/14/2021, 07:06:26 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=7 200 200 619 B 2.47 MB 7.54 s 7.52 s None 195.169.84.220
09/14/2021, 07:06:34 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/customerinvoice?pageSize=1000&pageNumber=8 200 200 619 B 1.03 kB 1.26 s 1.26 s None 195.169.84.220
09/14/2021, 07:06:35 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/project?pageSize=1000&pageNumber=1 200 200 611 B 332.56 kB 9.22 s 9.22 s None 195.169.84.220
09/14/2021, 07:06:44 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/project?pageSize=1000&pageNumber=2 200 200 611 B 1.03 kB 602.04 ms 601.48 ms None 195.169.84.220
09/14/2021, 07:06:45 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/projectbudget?pageSize=1000&pageNumber=1 200 200 617 B 464.43 kB 1.05 s 1.04 s None 195.169.84.220
09/14/2021, 07:06:46 AM GET https://api.hsleiden.nl/provider/visma/visma_stichting_hsl/v1/projectbudget?pageSize=1000&pageNumber=2 200 200 617 B 1.03 kB 437.78 ms 437.28 ms None 195.169.84.220
Can you see any performance troubles at around 1:56 last night?
Kind regards,
Frank Schimmel
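For reference, the paging pattern in the log above boils down to a loop like this sketch (assuming the endpoint returns a plain JSON array and that an under-full page marks the last page; the token is a placeholder):

```python
import requests

BASE = "https://integration.visma.net/API/controller/api/v1"
HEADERS = {"Authorization": "Bearer ..."}  # placeholder token

def fetch_all_invoices(page_size: int = 1000) -> list:
    """Page through customerInvoice; assumes a JSON array per page."""
    items, page = [], 1
    while True:
        resp = requests.get(
            f"{BASE}/customerInvoice",
            params={"pageSize": page_size, "pageNumber": page},
            headers=HEADERS,
            timeout=120,
        )
        resp.raise_for_status()  # a single 504 here aborts the whole import, as in the log above
        batch = resp.json()
        items.extend(batch)
        if len(batch) < page_size:  # under-full page: assumed last page
            return items
        page += 1
```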
Hi,
We'll look into it and see if we can see any issues.
Thank you for letting us know.
Hello Magnus,
Last night we had another 'Gateway timeout' 504 error at 1:52:50 (last entry in this table):
09/17/2021, 01:52:29 AM | GET | 200 | 200 | 639 B | 990.18 kB | 3.2 s | 3.19 s | None | 195.169.84.220 | West Europe | 8b24b6ce-1351-4695-b410-5fe327e5ba51 | |
09/17/2021, 01:52:32 AM | GET | 200 | 200 | 639 B | 999.97 kB | 3.32 s | 3.31 s | None | 195.169.84.220 | West Europe | 287027eb-ffda-41a0-b8b2-2fe1f288be7d | |
09/17/2021, 01:52:36 AM | GET | 200 | 200 | 639 B | 986.91 kB | 3.29 s | 3.28 s | None | 195.169.84.220 | West Europe | aa5d823d-308f-43c4-8219-042edd90477b | |
09/17/2021, 01:52:39 AM | GET | 200 | 200 | 639 B | 973.74 kB | 3.44 s | 3.42 s | None | 195.169.84.220 | West Europe | 681ca658-ba30-4dfe-b1ce-73dcece1f290 | |
09/17/2021, 01:52:42 AM | GET | 200 | 200 | 639 B | 983.05 kB | 3.79 s | 3.78 s | None | 195.169.84.220 | West Europe | 517b273f-1f79-4d40-ab13-8dac8bde34cc | |
09/17/2021, 01:52:46 AM | GET | 200 | 200 | 639 B | 998.55 kB | 4.02 s | 4.01 s | None | 195.169.84.220 | West Europe | da7a4bd1-7c6a-4c95-8360-821e8d48a24f | |
09/17/2021, 01:52:50 AM | GET | 504 | 504 | 639 B | 401 B |
Hi,
If it's only occasional, there should not be a general issue.
How long did it take before timing out? Did it work when you tried again?
Like Frank, we also got 504s at about the same time last night.
In addition, there were some new 504s just now at 12:27.
Ok, thank you, we'll inform the team.
@andreasaulin @schimmel-hsleiden Is it only happening for specific companies or in general? Could you please send us the company IDs?
We are not seeing anything out of the ordinary, average response time is normal.
Hi Magnus,
Our import went well for the past two weeks, but last Monday (4th of October) we had another 'Timeout' and the import stopped at 01:26:08. Did you have any other trouble that night? Can you see any cause? Is there too much traffic on the server at this time?
Timestamp | Method | Response code | Service response code | Request size | Response size | Response time | Service response time | Cache | IP address | Service region |
10/04/2021, 01:25:58 AM | GET | 200 | 200 | 638 B | 1.29 MB | 2.01 s | 1.99 s | None | 195.169.84.220 | West Europe | |
10/04/2021, 01:26:00 AM | GET | 200 | 200 | 638 B | 1.29 MB | 1.99 s | 1.98 s | None | 195.169.84.220 | West Europe | |
10/04/2021, 01:26:02 AM | GET | 200 | 200 | 638 B | 1.29 MB | 3.41 s | 3.39 s | None | 195.169.84.220 | West Europe | |
10/04/2021, 01:26:06 AM | GET | 200 | 200 | 638 B | 1.29 MB | 2.31 s | 2.3 s | None | 195.169.84.220 | West Europe | |
10/04/2021, 01:26:08 AM | GET | 0 | 0 | 0 B | 0 B | 1.67 mins | 1.67 mins | None | 195.169.84.220 | West Europe
Kind regards,
Frank Schimmel
Hi,
We can't see any particular issues: a slightly higher average response time, but not a problematic increase, and no increase in errors.
Hi Magnus,
When we run the same import batch starting at 6:30, we have no issues or timeouts. When an import is running and we receive an error, the import stops. We do not retry API calls after one has failed.
Kind regards,
Frank Schimmel
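Since a single transient 504 aborts the whole run, one mitigation to consider, sketched below under the assumption that these GETs are safe to repeat, is retrying gateway errors with exponential backoff before giving up:

```python
import time
import requests

TRANSIENT = {502, 503, 504}  # gateway/capacity errors seen in this thread

def get_with_retry(url, params=None, headers=None, attempts=4):
    """GET with exponential backoff on transient gateway errors and client-side timeouts."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, params=params, headers=headers, timeout=120)
            if resp.status_code not in TRANSIENT:
                return resp  # success, or a non-retryable error for the caller to handle
        except requests.Timeout:
            pass  # treat a client-side timeout like a transient gateway error
        if attempt < attempts:
            time.sleep(2 ** attempt)  # back off 2 s, 4 s, 8 s between attempts
    raise RuntimeError(f"GET {url} still failing after {attempts} attempts")
```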
Our Company ID=838682
Kind regards,
Frank
I don't know if it's related, but the last few days we've been experiencing a performance loss on the InventorySummary endpoint as well.
Hello people of Visma,
We have had the same problems with the Visma API. We import all Visma data into our Information Framework. The problems started on the 12th of August with (403) Forbidden errors, and from 20/8 we have had operation timeouts. Now it all seems fine again. But the question is: did something change on the 12th of August that caused all these problems, and is Visma monitoring timeouts to prevent these problems in the future?
Log overview:
| ## | Succeeded | VismaApi | ExtractJSON.dtsx | 26-8-2021 09:15:03 | 26-8-2021 10:11:54 | 3411,403 | 1 | Hand | | |
| ## | Failed | VismaApi | ExtractJSON.dtsx | 25-8-2021 23:20:00 | 25-8-2021 23:23:52 | 231,368 | 1 | Sched | The operation has timed out | JournalTransaction 202001 |
| ## | Succeeded | VismaApi | ExtractJSON.dtsx | 24-8-2021 23:20:01 | 25-8-2021 00:45:27 | 5126,278 | 1 | Sched | | |
| ## | Succeeded | VismaApi | ExtractJSON.dtsx | 24-8-2021 11:47:36 | 24-8-2021 12:39:08 | 3091,611 | 1 | Hand | | |
| ## | Succeeded | VismaApi | ExtractJSON.dtsx | 23-8-2021 11:30:01 | 23-8-2021 12:39:36 | 4175,445 | 1 | Sched | | |
| ## | Succeeded | VismaApi | ExtractJSON.dtsx | 23-8-2021 09:05:19 | 23-8-2021 09:57:43 | 3144,755 | 1 | Hand | | |
| ## | Failed | VismaApi | ExtractJSON.dtsx | 23-8-2021 01:20:01 | 23-8-2021 01:32:54 | 773,356 | 1 | Sched | The remote server returned an error: (502) Bad Gateway. | JournalTransaction 202006 |
| ## | Succeeded | VismaApi | ExtractJSON.dtsx | 20-8-2021 10:28:41 | 20-8-2021 11:28:12 | 3571,425 | 1 | Hand | | |
| ## | Failed | VismaApi | ExtractJSON.dtsx | 20-8-2021 01:20:01 | 20-8-2021 01:48:09 | 1687,998 | 1 | Sched | The operation has timed out | JournalTransaction 202011 |
| ## | Succeeded | VismaApi | ExtractJSON.dtsx | 19-8-2021 10:29:10 | 19-8-2021 11:40:11 | 4261,746 | 1 | Hand | | |
| ## | Failed | VismaApi | ExtractJSON.dtsx | 19-8-2021 01:20:01 | 19-8-2021 01:20:05 | 4,125 | 1 | Sched | The remote server returned an error: (403) Forbidden. | JournalTransaction 202000 |
| ## | Failed | VismaApi | ExtractJSON.dtsx | 18-8-2021 01:20:01 | 18-8-2021 01:20:04 | 3,626 | 1 | Sched | The remote server returned an error: (403) Forbidden. | JournalTransaction 202000 |
| ## | Failed | VismaApi | ExtractJSON.dtsx | 17-8-2021 01:20:01 | 17-8-2021 01:20:05 | 4 | 1 | Sched | The remote server returned an error: (403) Forbidden. | JournalTransaction 202000 |
| ## | Failed | VismaApi | ExtractJSON.dtsx | 16-8-2021 01:20:00 | 16-8-2021 01:20:04 | 3,906 | 1 | Sched | The remote server returned an error: (403) Forbidden. | JournalTransaction 202000 |
| ## | Failed | VismaApi | ExtractJSON.dtsx | 13-8-2021 01:20:01 | 13-8-2021 02:01:02 | 2461,044 | 1 | Sched | The remote server returned an error: (403) Forbidden. | Supplier |
| ## | Failed | VismaApi | ExtractJSON.dtsx | 12-8-2021 01:20:01 | 12-8-2021 02:03:54 | 2633,279 | 1 | Sched | The remote server returned an error: (500) Internal Server Error. | Ledger |
| ## | Succeeded | VismaApi | ExtractJSON.dtsx | 11-8-2021 01:20:01 | 11-8-2021 02:09:13 | 2951,615 | 1 | Sched | | |
Kind regards,
Frank Schimmel
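One pattern that helps with a log like the one above is separating retryable statuses from fatal ones, so a nightly job backs off on 5xx but stops immediately on 403. The policy below is an assumption, not Visma guidance:

```python
def classify(status: int) -> str:
    """Rough split of the HTTP statuses seen in this thread (assumed policy, adjust to taste)."""
    if status in (502, 503, 504):
        return "retry"       # transient gateway/capacity problems
    if status == 500:
        return "retry-once"  # may be transient, but do not hammer the server
    if status in (401, 403):
        return "stop"        # credentials/permissions: retrying will not help
    return "ok" if status < 400 else "stop"
```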
Hi,
We have since made changes to our infrastructure to better handle the API traffic, and we have also put more monitoring in place so that we have a better chance of catching issues where these timeouts occur.
We are currently still monitoring this and investigating what improvements we can implement to eliminate the risk of this occurring again.
Please let us know if you see these types of issues again.
Thank you.
Hi Magnus,
Thank you for the answer, and good that you are monitoring.
Kind regards,
Frank
Hi,
We have the same issue and it's really critical. We see this in several endpoints: Inventory, SalesPrice, Customer, SalesOrder, etc.
Hope this will be solved as soon as possible. Thanks.
Log from system errors since last:
- 03:59 31.08: 5102
- 03:20 31.08: 5102 and timeout
- 23:18 30.08: 5102
- 22:41 30.08: 5102
- 22:23 30.08: 5102
- 21:06 30.08: 5102
- 20:38 30.08: 5102
- 19:47 30.08: 5102
- 19:30 30.08: 5102
- 16:20 30.08: 5520 and {"message":"VismaId: 97fe0349-e41f-411e-a5fd-b01c1abeaf54. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)"} and 502 bad gateway.
We also have the issue of not getting any response when we are posting (missing 201 response), as reported earlier. This has now been a problem for 2-3 weeks.
Example errors:
/controller/api/v1/Inventory?attributes=%7B%22BUTIKK3%22%3A1%7D&availabilityLastModifiedDateTimeCondition=%3E&availabilityLastModifiedDateTime=2021-08-31 01:49:46&pageNumber=1 {"ExceptionType":"IPPException","ExceptionMessage":"","ExceptionFaultCode":"5102","ExceptionMessageID":"5102_a9279539-a9be-4059-856c-c191c838d45c","ExceptionDetails":""}
/controller/api/v1/customer?status=Active&corporateId=986503781&pageNumber=1 {"ExceptionType":"IPPException","ExceptionMessage":"","ExceptionFaultCode":"5102","ExceptionMessageID":"5102_69533f9a-384c-4351-b8a8-4caae57365df","ExceptionDetails":""}
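The 5102 bodies above are machine-readable JSON, so a client can log the fault code and message ID (useful in support tickets) before deciding what to do next. A small parsing sketch using the field names from the responses above:

```python
import json

def parse_ipp_error(body: str) -> tuple[str, str]:
    """Extract fault code and message ID from an IPPException body like those above."""
    err = json.loads(body)
    return err.get("ExceptionFaultCode", ""), err.get("ExceptionMessageID", "")

code, msg_id = parse_ipp_error(
    '{"ExceptionType":"IPPException","ExceptionMessage":"","ExceptionFaultCode":"5102",'
    '"ExceptionMessageID":"5102_a9279539-a9be-4059-856c-c191c838d45c","ExceptionDetails":""}'
)
print(code, msg_id)  # 5102 5102_a9279539-...
```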
Hi,
We have increased the capacity of the servers handling the API traffic; response times should now be far lower and no longer be an issue.
We are still investigating the root cause of the increase in response time and will still be monitoring this.
Please let us know if you are still having the same issue.
Thank you!
Hi.
Good news. We have had no errors related to this so far today. We will continue monitoring.
Brgds Steinar