In this insightful episode of the Legal Helm podcast, Helm360’s Customer Success Manager, Kiran Gill, and Executive VP, Bim Dave, discuss Elite 3E performance testing and why it’s a must-do. Bim also shares DIY tips on how firms can do some basic troubleshooting on their own to diagnose underperforming 3E systems.
Bim Dave is Helm360’s Executive Vice President. With 15+ years in the legal industry, his keen understanding of how law firms and lawyers use technology has propelled Helm360 to the industry’s forefront. A technical expert with a penchant for developing solutions that improve business systems and user experience, Bim has a knack for bringing high quality IT architects and developers together to create innovative, useable solutions to the legal arena.
Kiran Gill is Helm360’s Customer Success Manager. With 10+ years in the legal industry, she has an innate understanding of law firm operations and legal IT, which she uses to help our global clients find the best products and/or services to meet their needs. Her background includes case management, customer service, client relations, and sales with an emphasis on communications and problem-solving. Kiran is passionate about customer service and uses a transparent, consultative approach when working with clients.
Kiran: We’re really excited to be here with Bim Dave to talk about how to get optimal performance out of your Elite 3E system. I’m Kiran Gill, one of the customer success managers at Helm360, based out of the UK office. Bim, thanks for joining us today.
Bim: Hi Kiran, it’s nice to talk to you today. For those of you who don’t know me, I’m Bim, the EVP at Helm360. My background is largely in the legal arena. Today we’re going to talk about 3E performance and the metrics you can use and monitor to get the best out of the system.
Kiran: So when it comes to 3E performance, what do we need to consider?
Bim: Whenever I’m thinking about 3E performance, there are four key areas that spring to mind. The obvious one is your infrastructure. By that, I mean the servers, the workstations, the browser, all of the elements that form the hardware layer that is going to house the 3E infrastructure. Those components can have a bearing on performance.
Then you’ve got the core application layer, which is the 3E product and anything else on top of it.
The third part to consider is the customization layer, which is not just focused around 3E customizations specifically, but all of the customizations associated with it.
This flows into integrations and downstream and upstream systems that 3E might be talking to.
So those are the four key areas: the infrastructure, the core product and the logic it uses, the customizations, and the integrations.
When you think about customizations, there are two key parts: the business logic and the data. When you think about implementing a product like 3E, there are variables. You’ve got your core product, which is doing what it does in terms of core business logic, and then there’s the business logic that you want to apply as part of your implementation. You might have different types of behavior, different types of checks happening as you’re navigating through a workflow, for example, which are going to be unique to you, to your firm, and to the decisions that you make. That business logic layer is specific to you and has not been tested in a lab. You can’t guarantee that it’s not going to have an impact on performance until you’ve tested it.
The other piece is the data. Not just from a volume perspective, which can influence performance, but how you interact with the data. If you’re a low-value, high-volume business, you have lots and lots of small transactions happening. This can have an impact on the way that the system performs. The opposite, where you’ve got a small number of very high-value transactions, impacts different areas of 3E. So it’s really important to consider those factors when you’re thinking about 3E performance generally.
Kiran: Okay, understood. You mentioned infrastructure earlier as one of the points to consider. As a firm, if I’ve followed the PSR and bought even better hardware, why do I still need to do a performance test? Shouldn’t I be covered at that point?
Bim: That’s a great question. So, the PSR, or product system requirements, really define the ideal hardware that you need to run the 3E System. Typically, when you’re scoping out the hardware that you eventually implement, there will be room for growth. There’ll be some consideration about how your database will grow over time, how the number of users might grow; you’re future-proofing the system as much as possible.
If we go back to the first question you asked around the different aspects of 3E performance that need to be considered, ultimately, every customer’s product implementation is going to be slightly different. So from an infrastructure and hardware perspective, the decisions you make about which load balancer you choose, whether it’s a Citrix NetScaler or a Coyote Point load balancer or something else, there are going to be different versions of firmware deployed. Your network layer is going to be slightly different. The way your end-users interact with 3E across the WAN might be different; over a VPN might be different. The group policies you apply to your PCs and your underlying storage infrastructure vendors, tools, and utilities will all be different. That will have an influence.
And then there’s the virtualization side of things. Whether you’re a virtual shop or a physical shop, all of those things play a part right? It would be impossible to test every single variation of hardware and configuration a firm puts in place to validate the performance will be optimum in the environment. So really, the PSR is there as a guide to define what is best practice in terms of scoping out a system. But until you add your specifics into the mix, you really don’t know how well it’s going to perform and how each of those elements will perform or need to be tuned to be able to give the best and optimal performance.
And as I mentioned earlier, you could have the best hardware in the world, but if your business logic or the customizations are poorly written or don’t consider performance, then you may not know the impact of that until you have everybody in the system. That’s where you add an unnecessary risk.
If you think about that ahead of time as part of the implementation, do some automated testing to see what impact the load really has on the environment with your configuration, your data, and your customizations in place. That gives you an opportunity to tune, optimize, and, once you go live, have real confidence the system is going to hold up. That’s really important.
Kiran: When we focus on the 3E aspect, then when you are in the process of implementing it, but you don’t have very many customizations, do you still need to worry about performance at that point? If you know you’re going to go with the blueprint of the vanilla system, how does that fit into it?
Bim: If you think about it in terms of risk levels, the level of risk goes down the less complicated you make it. Having little customization is a good thing for an implementation because you limit the exposure that the custom layer typically adds to the business logic. If you remove that element, then that definitely de-risks it a little bit. But it still comes back to the same thing in that your data is still your data. The way you configure 3E will have an impact in terms of how well it performs.
You can still make lots of decisions without customization, and more configuration can also influence how 3E performs. Little things like how many rows you configure 3E to return as a maximum default in terms of its reporting engine. You have the ability, as part of just the standard product, to set that at firm level. You can set it to whatever you want beyond the default. A simple change like that could change the amount of load being executed on the system quite dramatically. There could be a valid business reason to implement that change, but you won’t know the impact from a workflow perspective until all of the systems and processes are being used concurrently. A lot of firms we work with leverage the great functionality that’s available out of the box, so technically it’s not a customization. It’s really a 3E configuration.
A workflow is designed to have multiple layers, which differ from firm to firm and have different routing criteria. It’s business logic you’re defining within the user experience to get through the workflow. Lots of other things can happen as a result, like notifications and other actions that can be fired off as part of the journey. Depending on how you implement those and how complex you make those workflows, you can still technically be a zero-customization firm yet have lots of layers of logic applied, and until you have 200-300 people in your system doing the same thing at the same time, you’re not going to know the real impact on performance.
My take is there’s still validity in doing as much testing in advance as possible when it comes to a more vanilla system, because you’re testing the configuration aspects. You’re also testing the data aspects and, most importantly, the variety of underlying hardware that’s going to be in place, which does need to be tuned or tailored to get the best performance.
Kiran: On the flip side, if I have lots of customizations, of course there’s more complexity in running a performance test. Does that mean I’ve got to spend a lot more money and time to test every customization?
Bim: That’s a great question, Kiran. The short answer is no. And let me explain why.
The types of customizations that typically happen vary in terms of performance impact. A customization that makes simple changes to a form, or applies simple business logic to a form to do some level of validation, is generally low impact. What we do as part of a performance testing exercise is determine which customizations deviate from what we consider to be a low-impact customization. Because realistically speaking, you want to do a performance test in a way that gets you the best bang for your buck. To me, that means testing your core business processes that come into play at month-end and year-end, when you’ve got the most people on your system, when you’re inputting as many time entries as possible, when you’re getting as many bills out of the door as possible, etc. That’s where the heavy hitting happens from a systems perspective.
Putting that into context from a testing perspective is important: if you’ve done some customizations, focus on the ones that are really going to make an impact.
We had a customer recently who made a customization to the collections module. They were allowing bulk collection updates to happen from the UI. This kind of scenario gets factored in as part of a performance testing exercise for us. We want to make sure we’ve considered the impact of somebody bulk updating 100 collection items. What is the net effect of that on the system? Does it cause a problem? Those kinds of things are highlighted in the discovery process to minimize the amount of investment needed in the performance testing suite. It gets the client the best bang for their buck, the reassurance that the system can stand up well to that scenario and still provide a good user experience.
Kiran: That’s great. When we think about a project where we’re implementing 3E, or even upgrading, there’s a tight budget. There’s a lot of concern around extra costs in these areas that we may not consider to be a necessity. Obviously, they quite clearly are. It sounds like there’s quite a lot to consider in a performance test and a lot of steps involved. If I haven’t considered performance testing until now and I’m going live next month, what do I do?
Bim: That’s a very valid scenario and something that we’ve faced a few times with customers who have left it too late in the game. My recommendation is to be thinking about performance and performance testing at the start of the project so it can be planned accordingly. You want to make sure you’ve got time to assess what needs to be tested from your unique perspective and that it’s a good, robust test.
There are ways to validate performance, particularly focusing on IT infrastructure performance, using the standard tooling available. In the scenario where you’ve got a very short time window, my recommendation would be to leverage an out-of-the-box testing library. For example, our suite mimics a typical month-end load using the standard processes in 3E. Part of our standard Automation Suite covers things like bulk time entries all the way through to generating bills and the print outputs going through the defined templates. You still need time to set up monitoring of the system and automate the load, but from an overall timeline perspective, it’s much shorter. We’re using a tried and tested library of automated tests that just need to be configured for your volume and then executed, with the environment monitored and recommendations made.
The biggest consideration when you’ve got a short window of opportunity isn’t the testing itself; that’s relatively straightforward and can be done in a short time frame. It’s the resolution phase that also needs to be considered. After we’ve executed the test and we’re monitoring the system, we look at the performance bottlenecks. Let’s say we find some significant issues in template logic, which cause slowness in the bills being generated and outputted. That requires some developer time to resolve. Then the solution needs to go through a change control process. Somebody needs to test the outcome. Ideally you want to repeat the test to make sure the problem has been solved.
Equally, if something’s found on the infrastructure side, say your antivirus (AV) is causing performance overhead, being able to diagnose that can take some time. You’ve got to factor that into the resolution phase before you can validate that you’ve actually solved the problem before go-live.
The only consideration I would put on the table is that while the core testing element, the tried and tested suite, is relatively straightforward and involves a short time frame, the outcome of that testing also needs to be considered. It could be a very simple tick-box exercise to say “yes, the system performs well.” More likely it’s going to be “yes, the system performed, but there are some areas of improvement.” Those areas of improvement need to be implemented. Then it’s really about how quickly that can get through a change control process and how quickly resources can be brought in to fix those problems or fix code. In some cases it could be a core product issue, in which case you’d need a solution from Thomson Reuters.
All of these things need to be considered.
Kiran: Okay, what you’re saying is, if you can, factor in performance testing at the beginning of a project. Brilliant, please do it. But it’s never too late; if you do leave it to the 11th hour, you can still do something about it.
Bim: Yes, absolutely.
Kiran: Once you have gone live, is it too late? What if you have gone live when you haven’t done a performance test but you notice some slowness in the system. What do you do then? Is it too late? How does that go?
Bim: Yes, it’s too late. Because now your end users are suffering as a result of it, right? And potentially some of your business processes are slow. The biggest impact of a system that’s intermittently slow, is that it impacts end user confidence in the system. What you’ve got to remember is there is user perception and buy-in to consider.
For example, your users are used to the proforma generation process in their old system. They’re so used to the old system that it’s intuitive to them. They can bang out what they need in a few minutes. In the new system, they’re still getting comfortable with the fact they have to navigate the screen slightly differently. It’s not as intuitive to them because they’re not used to it. If on top of that the new system is slow and unresponsive or intermittently hanging up, that’s going to cause a lot of frustration and ultimately impact productivity.
If you are in this situation, it’s really about making sure that you understand where the slowness is coming from. It does become more difficult and painful to troubleshoot because change control becomes a lot harder. You just can’t take elements of the system down to replace or reconfigure aspects of it. You have to be more careful about change.
Monitoring is key. Make sure all aspects of the system are monitored so that you can hone in on where the slowness is. It allows you to go into that diagnosis phase and really understand and debug where the issue is, be it at the database level, in your core underlying infrastructure, or within the 3E application on your WAPI servers. Really hone in on what those problems are. Once you know, or have narrowed down the scope of the investigation, then you can produce a fix. That could be tuning a database setting such as parallelism. It could be a code fix. It could be a service patch that needs to be applied. There are lots of variations in terms of what the fix could look like.
Ultimately, it’s making sure that you’ve got a window of opportunity to apply that fix in a controlled manner, not bypassing any change control processes. Everything needs to be tested before it gets rolled out to your production system. Then continue monitoring. Once that fix has been deployed, you want to make sure you are continuously monitoring the system for bottlenecks so you can continue to detect other areas of opportunity. The likelihood is, once you’ve fixed one thing, you may have other opportunities to implement performance tuning exercises in various areas to get you closer to what you want: a smooth-running 3E system.
Kiran: Throughout this conversation, we’ve focused on performance testing, what it means, and how it’s done. Let’s focus on the users now, because ultimately you want to ensure that the users go live on day one as smoothly and happily as possible. How can we be confident that 3E will perform when everyone is on the system? How do you achieve that?
Bim: The key to success there is an automated performance test. The best possible scenario is that you’ve got a test suite built using tried and tested methods. It can exercise the system at the active concurrent use that you are going to have when you go live, at peak timings like month-end and year-end. Combine that with your usage pattern: the types of things you might be doing that are different from another firm, or areas where you’ve got high-volume transactions, and mimic those. You use history to dictate how your test suite looks, so that volume information, like how many time cards you typically generate, the frequency, etc., gets as close as possible to your reality. Likewise with integrations for downstream and upstream systems.
Make sure all of those things are in place during your test so you understand what it’s going to look like. Also include any scheduled jobs that could be happening in the background, any reporting solutions that might hit the 3E database, or any third-party systems that might hit the 3E database. Making sure all of those things are in place and active while a controlled performance test is happening allows you to mimic the load closely enough to what reality will look like. You’re never going to get it perfect, but you can get close. If you combine that with the right level of monitoring, where you’re monitoring each layer of 3E, your infrastructure, the database, all the variable aspects, etc., then you can have real confidence that when you hit the switch to go live and everybody’s on the system, it’s going to stand up. You will have highlighted most of the issues prior to going live just by exercising the system in that controlled manner.
Kiran: Before we finish up, are there any other performance tips that you can share with us?
Bim: Yes, I do have a few I can share with you.
The first is always consider the impact of network performance. We’ve been talking to customers who complain about performance and say the 3E system is slow. When we dive into the details, we look at it in different ways. Is the application actually slow? There are simple steps you can take to diagnose that a little bit further. For example, if you run your browser from your workstation, you’re experiencing a certain level of performance. We want to assess whether you get the same performance in different scenarios. First step: rule out the load balancer adding any overhead. Instead of going via the load balancer URL, go directly to a single WAPI URL. See if you get the same kind of performance. Always use a stopwatch to measure the timings of key processes so you can be consistent in terms of benchmarking and testing.
Then take that one step further. If you see the same kind of performance hitting the WAPI server directly, test what’s going on on the WAPI server itself. You’re ruling out any connectivity issues between the client workstation and the server itself. Open a browser there. Do the same test and see if you get similar performance. Then you’re ruling out quite a few things: group policies applied to the local workstation, local AV impacts, browser settings that could be controlled by group policy, etc. Those kinds of things come into play. You can hone in on where your issue might be.
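The stopwatch comparison above can be scripted so your timings stay consistent from run to run. Below is a minimal Python sketch: the helper just times any callable and reports median, min, and max, and the URLs in the usage comment are hypothetical placeholders, not real 3E endpoints.

```python
import statistics
import time

def benchmark(action, runs=5):
    """Time a callable `runs` times; return (median, min, max) in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        action()  # the operation being measured
        timings.append(time.perf_counter() - start)
    return statistics.median(timings), min(timings), max(timings)

# Example usage (hypothetical URLs; substitute your own load-balancer URL
# and a direct WAPI URL, then compare the two medians):
#
#   from urllib.request import urlopen
#   via_lb, _, _ = benchmark(lambda: urlopen("https://3e-lb.example.com/").read())
#   direct, _, _ = benchmark(lambda: urlopen("https://wapi01.example.com/").read())
```

Comparing the median rather than a single run smooths out one-off network blips, which is the point of being consistent when benchmarking.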
One really useful tool from a general WAN performance perspective is Google Chrome and the developer tools that come with it. If you open up 3E and hit the F12 key on your keyboard, you can access Chrome’s built-in developer tools.
One of the cool things is you can open up a process that is taking a long time and use that as your starting point for diagnosis. So, first step: is it the user, the workstation communicating with the server, or something happening within 3E that’s causing the slowness?
Then there are two things to explore further. First, enable 3E logging so you can go into the Users and Roles process, look at the SQL output, and look at the calls being made from a pure 3E perspective. See how long that’s taking; that time is one aspect of the journey and probably the bulk of it. Then that data needs to be transmitted back and forth between the workstation and the server. Once you understand how long the 3E component takes to grab the data and run its business logic, you have a number in mind versus the latency it takes to transmit that data across. You can use the network tab in Google Chrome to get an output of what the traffic transfer looks like between your workstation and the server. You can save that output into what’s known as a HAR file, which is basically an HTTP archive.
One of the great things Google has done is create a tool called the Google HAR analyzer, which is available online and free to use. It helps debug these kinds of scenarios. You upload your HAR file and run it through the analyzer. It breaks down all of the wait times for you so you can see how much of it is waiting for a packet to be received from the server versus your browser waiting for something to happen from a server-side perspective.
I typically look for something with a very high receive time. That usually indicates there’s a little bit of slowness in terms of data coming back from the server. The server’s already done what it needs to do, but there’s a delay in it getting back to you. If you see a high wait time, then typically your browser is waiting for the server to bring something back to you. In that case, I would spend more time looking at the 3E logs and diagnose that a little bit further.
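If you prefer to inspect a HAR export programmatically instead of through the online analyzer, a short Python sketch can pull out the wait versus receive timings per request. The `3e_session.har` filename in the usage comment is just an assumed example of a file saved from Chrome's Network tab.

```python
import json

def summarize_har(har):
    """Per-request wait vs. receive times from a parsed HAR (HTTP Archive) dict.

    HAR timing fields are in milliseconds; -1 means 'not measured'."""
    rows = []
    for entry in har["log"]["entries"]:
        t = entry["timings"]
        rows.append({
            "url": entry["request"]["url"],
            "wait_ms": max(t.get("wait", -1), 0),        # server think time
            "receive_ms": max(t.get("receive", -1), 0),  # time streaming the response back
        })
    return rows

def slowest(rows, key, n=3):
    """Top-n requests by a timing field, worst first."""
    return sorted(rows, key=lambda r: r[key], reverse=True)[:n]

# Example usage, assuming you exported "3e_session.har" from Chrome's Network tab:
#   rows = summarize_har(json.load(open("3e_session.har")))
#   slowest(rows, "wait_ms")     # high wait    -> dig into the 3E logs / server side
#   slowest(rows, "receive_ms")  # high receive -> look at the network path back to you
```

This mirrors the rule of thumb above: high wait points at the server, high receive points at the transfer back to the workstation.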
The other big area is the SQL server itself. The SQL server accounts for 80-90% of the performance tuning exercises that we do for customers. That’s usually where you can get a lot of improvement. The SQL side is really about making sure you understand your workload on a regular basis once you’re live on 3E. This kind of activity is most useful once you’ve gone live, because that’s when you’re going to see the real usage of the system.
It’s important to regularly monitor the SQL workload. That can really highlight good opportunities for things like index tuning and other areas that are contributing to bad performance, things like locking and blocking that could be happening in the background. There are really good SQL DMVs, which are basically system views that allow you to look at statistics relating to the queries executed on your SQL server. A couple spring to mind. The first is the OS wait stats DMV, sys.dm_os_wait_stats. Looking at that DMV gives you a good indication of what your top waits are across the system from a SQL server perspective and allows you to diagnose them further. On the query side of things, one of the go-to DMVs is sys.dm_exec_query_stats. It allows you to see top queries by various different measures, like top queries by CPU, disk, or memory, and helps you understand what the impact of those queries is across the system. It’s a really good way to see queries that could be tuned.
For example, if you see one that’s really high on CPU, that could indicate issues with your thresholds for leveraging parallelism. It could be a badly written query that could be tuned. There could be lots of different reasons, but it gives you a starting point for diagnosing them.
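As an illustration of the DMVs mentioned above, here is a T-SQL sketch, one reasonable starting point rather than a definitive script: it ranks cached statements by total CPU via sys.dm_exec_query_stats and lists the top instance-wide waits from sys.dm_os_wait_stats.

```sql
-- Top 10 cached statements by total CPU (sketch; swap the ORDER BY to
-- total_logical_reads or total_elapsed_time to rank by reads or duration).
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

-- Top waits across the instance since the stats were last cleared.
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```

Note that both DMVs reflect activity since the last restart (or stats clear), so sampling them regularly gives a truer picture of the live 3E workload than a one-off look.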
SQL Server has some really nice tools and configurations you can enable that allow you to fire an alert based on thresholds being met for blocked or deadlocked processes. For example, if something’s been blocked for 20 seconds, you can configure SQL Server to fire an alert to let you know that something major is happening on the system. So that’s the other big area. SQL is where you should be spending a lot of time doing ongoing monitoring.
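The 20-second blocking alert described above maps to SQL Server's blocked process threshold setting. A minimal configuration sketch follows; note you would still need an alert or an Extended Events / trace session consuming the blocked-process reports for anyone to actually be notified.

```sql
-- Emit a blocked-process report whenever a task has been blocked for
-- more than 20 seconds (matching the example above).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 20;
RECONFIGURE;
```

Setting the threshold too low can generate noisy reports, so 15-30 seconds is a common starting range before tuning it to your workload.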
The final thing I would mention is antivirus. Antivirus is often a painful overhead. Depending on how you’ve got it configured, that overhead can really impact performance in a negative way. It’s really important to make sure that you understand where your AV lives. You’re going to have different types of AV, from the workstation all the way through to your server side and potentially the network as well. Make sure you understand what AV is in place and what is actively being scanned (real-time scanning or on-access scanning). Those are the things to watch out for. They’re the ones that have the biggest impact.
Make sure you’ve got the right balance of security versus performance. You don’t want to turn AV off; you want it protecting your system, but not interfering with performance. Microsoft has some great best practice guides out there covering SQL Server configuration for typical AV solutions. These are focused around active scanning of things like log files, database files, and other aspects of the environment.
Then there are the WAPI servers. From an IIS perspective, there are certain folders that should be excluded from real-time scanning to enable the best performance. The same applies from a FUNC perspective, because a lot of the writes and reads 3E does from a document creation perspective happen on the FUNC. Make sure those folders are excluded from real-time scanning while still doing regular scheduled scans to confirm there are no viruses or other issues there. It’s really the on-access and real-time scanning that you want to limit as much as possible to ensure the best possible system performance.
Kiran: Well, thank you for those. When I asked if there were performance tips you could share, I didn’t realize there was so much more we could discuss. That was really useful. Thanks for bringing so much knowledge and advice to this session. We’ve really been able to break down some of these preconceptions when it comes to performance testing. Thanks for joining.
Bim: Thank you. Great talking to you.
Interested in learning more about Elite 3E performance testing? Have questions about how it can benefit your firm specifically? Contact us! We’re happy to help.