It can be tricky to identify exactly what is contributing to this error; we say “contributing” because there may be several factors. Salesforce allows each transaction only so much CPU time before it is terminated with this error. Synchronous transactions, such as a record being created or updated, are allowed up to 10 seconds (though in practice the cutoff is closer to 15). Asynchronous transactions, such as a running batch job, are allowed up to 60 seconds. This budget includes every piece of logic that runs during the transaction, both managed package logic and your own local customizations.
It’s very important to note that the stack trace in the error message, which identifies where the error was thrown, can be very misleading. For this particular error, the stack trace only tells you which logic happened to be running at the moment the threshold was reached. For example, a transaction might spend 9.8 seconds running logic that has nothing to do with Kubaru, followed by a piece of Kubaru code during which the limit is reached. The stack trace will report a failure within a Kubaru class, even though that code had been running for only 0.2 seconds, well within an acceptable range.
All of our code has been developed with best practices to optimize for speed and efficiency. In most cases we find this error was not caused by our package code, but rather by something that preceded its execution, such as local Apex code or Process Builder / Flow logic. Process Builder and Flow in particular are known to run inefficiently at times, and have been observed to consume extraordinary amounts of CPU time in some cases.
A good first step to troubleshoot this error is to set up debug logs so that you can learn more about any transactions that are prone to failure. This will give you insight into which operations are taking the longest time to run. From there you can try the following:
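Alongside debug logs, you can measure CPU consumption directly from anonymous Apex. This is a minimal sketch (the operation being timed is a placeholder you would replace with whatever DML or logic you suspect); it uses the standard `Limits.getCpuTime()` and `Limits.getLimitCpuTime()` methods to show how much of the transaction’s CPU budget a given step consumes:

```apex
// Sketch: bracket a suspected operation with Limits.getCpuTime() to see
// how much of the transaction's CPU budget it consumes.
Integer cpuBefore = Limits.getCpuTime();

// ...run the suspected operation here, e.g. an update that fires your automation...

System.debug('CPU ms consumed by operation: ' + (Limits.getCpuTime() - cpuBefore));
System.debug('Total CPU ms used so far: ' + Limits.getCpuTime()
    + ' of ' + Limits.getLimitCpuTime() + ' allowed');
```

Running this around different steps of a failing transaction helps you attribute CPU time to the correct component rather than relying on the (potentially misleading) stack trace.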
- If you are inserting or updating large numbers of records in bulk, consider reducing the batch size.
- If you find the suspect is a local Apex trigger or class, you will need to look into rewriting those components more efficiently.
- If the issue is code contained in a managed package, reach out to that package’s support team with your findings.
- If the issue stems from Process Builder or Flow, you can search online for the many forum threads that discuss methods to optimize performance, as this issue is quite common. You may want to consider reworking such logic into workflow rules or Apex code, which tend to be much less harsh on CPU time. You can also upvote the idea to improve Process Builder / Flow performance.
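To illustrate the batch-size suggestion above: for Apex batch jobs, the scope size defaults to 200 records per execution, and the standard `Database.executeBatch` method accepts an optional second parameter that lowers it. This is a sketch only; `MyBatchJob` is a hypothetical `Database.Batchable` implementation standing in for your own job:

```apex
// Sketch: run a hypothetical batch job with a scope of 50 records per
// execution instead of the 200-record default, so each transaction
// does less work and stays further from the CPU limit.
Database.executeBatch(new MyBatchJob(), 50);
```

If you are loading data with a tool such as Data Loader, the equivalent step is lowering the batch size in that tool’s settings.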
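For the trigger suggestion above, the most common CPU-time offender is a query or DML statement inside a loop. A minimal before/after sketch (the object, trigger name, and field are assumptions chosen for illustration, not code from any real org):

```apex
// Hypothetical trigger sketch. The inefficient pattern would be to run a
// SOQL query once per Contact inside the loop; the bulkified pattern below
// collects Ids first, queries once, and performs a single DML statement.
trigger ContactAccountSync on Contact (after insert) {
    Set<Id> accountIds = new Set<Id>();
    for (Contact c : Trigger.new) {
        if (c.AccountId != null) {
            accountIds.add(c.AccountId);
        }
    }

    // One query for all affected Accounts, instead of one per Contact.
    Map<Id, Account> accounts = new Map<Id, Account>(
        [SELECT Id, Description FROM Account WHERE Id IN :accountIds]
    );

    // ...modify the queried Accounts as needed...

    // One DML statement for the whole batch.
    update accounts.values();
}
```

Bulkifying in this way keeps CPU time (and query/DML governor limits) roughly constant regardless of how many records enter the trigger.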
You can always reach out to us at firstname.lastname@example.org for assistance.