
Sims.Net is slow

Posted on 4 September 2019

I should preface this technical post by saying that I am in no way a database dude. I understand what they are and how they work at a basic level, and sure, I've written some basic SQL queries in PHP and queried some stuff in MSSQL, but as a typical “jack-of-all-trades” type I am no expert by any stretch of the imagination.

This is also relevant for any database, not just SIMS.Net.

What is SIMS?

Slow.

For those of you who don't know, SIMS(.Net) is a Capita application used by a whole bunch of schools to record data about their staff, students and parents. It's a school MIS (Management Information System).

It's quite renowned for being very slow. There are many posts on the edugeek.net forum complaining about this, and there are many suggested ways of fixing it. Okay, that's not entirely true: lots of these fixes may grant you a slight boost, but the backend of SIMS was written a very long time ago (decades) and, at the end of the day, it's just a slow bit of software. That said, there are some things that can be done to help, and this post outlines one of them.

The Database File

SIMS is a Windows client-server application. The server side runs from a single database, though there are some additional features which have their own databases.

Typically, the school's IT department will not install the SIMS server themselves. It's quite finicky and likes things set up in a very particular way, so installation is often handled by a third-party organisation.

The client application uses a shared network drive for some data storage. Typically, at least in every school I have seen, this is set up on the same drive on the server as the database is stored in. This is generally bad practice when it comes to databases - you should put your database file on a disk/drive that has nothing else on it. No OS, no file storage, no nuthin'!
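If, like ours, your data file has already ended up on a shared volume, moving it to a dedicated one only needs a little downtime. A minimal T-SQL sketch follows; the database name "SIMS", its logical file name and the "F:\SQLData" path are all assumptions here, so check your real values before running anything:

-- Assumption: the database and its logical data file are both named "SIMS",
-- and F:\ is a freshly dedicated drive. Verify both before running.
ALTER DATABASE SIMS
MODIFY FILE (NAME = SIMS, FILENAME = 'F:\SQLData\SIMS.mdf');

ALTER DATABASE SIMS SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Physically move SIMS.mdf to F:\SQLData\ while the database is offline, then:
ALTER DATABASE SIMS SET ONLINE;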

There are a couple of reasons for this advice. The most obvious is that while a file on that same disk is being written to or read from, the disk can't serve the database at the same time. Your application must wait its turn before the database can be queried. Schools don't really care about this, as the slowdown isn't noticeable to end users given the volume of non-database data being read or written, but on systems where every milli- or microsecond counts this can be a big bottleneck, depending on how frequently the other files on the disk are used.

The second reason is related to the first, and it has to do with how the database file itself is initially set up. You can define the size of a database when you create it, and you should really set it bigger than the size you expect the database to reach by the time it's migrated to a new system or replaced. Typically, if you know that in 3 years you're going to migrate away from the system and it will hold roughly 10GB of data by then, you'll probably want to add 20-50% extra on top of that when you first set up the database, to let the data grow into the file. So you make a 15GB database and can sit happy knowing that you'll likely never touch the sides.
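For illustration, here's roughly what pre-sizing looks like in T-SQL. Everything here is made up for the example - the database name, the paths and the sizes - so treat it as a sketch rather than a recipe:

-- Illustrative sketch: pre-size a new database so it rarely needs to grow.
-- The name "SchoolMIS", paths and sizes are all invented for this example.
CREATE DATABASE SchoolMIS
ON PRIMARY (
    NAME = SchoolMIS_data,
    FILENAME = 'E:\SQLData\SchoolMIS.mdf',
    SIZE = 15GB,          -- expected final size plus 20-50% headroom
    FILEGROWTH = 512MB    -- a sane increment in case it fills up anyway
)
LOG ON (
    NAME = SchoolMIS_log,
    FILENAME = 'E:\SQLData\SchoolMIS_log.ldf',
    SIZE = 2GB,
    FILEGROWTH = 512MB
);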

You can also set the database up to grow dynamically once it fills up. This is called “Autogrowth”. You start with, say, a 20MB database, but this quickly fills up. MSSQL is configured to allow it to grow, so it increases the file size of the database in chunks. These chunks can be a consistent size (add 1MB each time) or a percentage of the current size (eg: grow by 10% of the current file size, so a 100MB database grows into a 110MB database, which then grows into a 121MB database, and so on).
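Both flavours are configured per file with the same command. A minimal sketch, again using the made-up names from the example above:

-- Fixed-size growth: add 512MB to the data file each time it fills up.
ALTER DATABASE SchoolMIS
MODIFY FILE (NAME = SchoolMIS_data, FILEGROWTH = 512MB);

-- Or percentage growth: add 10% of the current file size each time.
ALTER DATABASE SchoolMIS
MODIFY FILE (NAME = SchoolMIS_data, FILEGROWTH = 10%);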

The default autogrowth increment for a database file is 1MB. This means that if you have a heavily used database with loads of data going in often, MSSQL will need to grow the file by 1MB over and over again. This obviously adds some overhead to operations; however, there's another side effect when other files are being used on the same drive as the database.

If you have a 50MB database, then write some files to the drive, then add more data to the database such that the file needs to grow, you're going to end up with a 50MB chunk of database data on the disk, a bunch of unrelated files next to it, and then another chunk of database data next to those. You see a single 51MB file, but at the disk level there's one 50MB chunk, some random file data, then another 1MB chunk. This is called fragmentation, and it means that to pull data out of the database the hard drive has to move its little needle (if you're using old spinning disks, like us; SSDs are another story) a greater distance, and more often, to get at the data you want. This slows things down.

What the Frag!?

Where I work, I recently found that the database had been set up with unlimited growth in 1MB increments. We went from a near-zero sized database at initialisation to what is currently a 26GB file. Each time we added data that pushed the database file to full, MSSQL would only ever add 1MB to the size of the file.
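If you want to check how your own server is configured, you don't need to click through the GUI; the size and autogrowth settings for every database file live in sys.master_files (sizes are stored as 8KB pages, hence the arithmetic):

-- List every database file with its current size and autogrowth setting.
SELECT DB_NAME(database_id)  AS database_name,
       name                  AS logical_name,
       size * 8 / 1024       AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END                   AS growth_setting
FROM sys.master_files
ORDER BY database_name;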

That 1MB growth setting, combined with a network drive configured on the same disk, has given us a horribly fragmented database!

There's a Sysinternals tool called contig.exe which lets you query an individual file (or a whole filesystem) and see how fragmented it is.

Following this guide I queried the database to see how fragmented it was. Keep in mind here that the author of that guide made a 3.5GB test database and purposefully fragmented it 2000 times to show a “heavily fragmented” example.

Here are the results from my test.

PS C:\temp\contig> .\Contig.exe -a "E:\MS SQL SERVER 2012\MSSQL11.SIMS2012\MSSQL\DATA\SIMS.mdf"

Contig v1.8 - Contig
Copyright (C) 2001-2016 Mark Russinovich
Sysinternals

E:\MS SQL SERVER 2012\MSSQL11.SIMS2012\MSSQL\DATA\SIMS.mdf is in 102951 fragments

Summary:
     Number of files processed:      1
     Number unsuccessfully procesed: 0
     Average fragmentation       : 102951 frags/file

Yeah. 102951 fragments for a single file. That's insane. Every time the database is queried it likely needs to navigate around the disks dozens of times to get all the relevant data, slowing things down considerably.

Fixing Fragmentation

We can use the contig.exe tool to fix this. It requires the database to be offline, so I had to wait until this weekend to do it.
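For anyone who'd rather use a query window than right-click in the GUI, the offline/online steps are the same couple of lines of T-SQL as in the relocation sketch earlier (and "SIMS" as the database name is still an assumption based on the .mdf file name):

-- Kick off any active sessions and take the database offline.
ALTER DATABASE SIMS SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Run contig.exe against the .mdf, then bring it back:
ALTER DATABASE SIMS SET ONLINE;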

I took the database offline (via the MSSQL Server Management Studio GUI) and attempted to defrag the database file.

PS E:\MS SQL SERVER 2012\MSSQL11.SIMS2012\MSSQL\DATA> C:\temp\contig\Contig.exe .\SIMS.mdf

Contig v1.8 - Contig
Copyright (C) 2001-2016 Mark Russinovich
Sysinternals


Summary:
     Number of files processed:      1
     Number of files defragmented:   1
     Number unsuccessfully procesed: 0
     Average fragmentation before: 102951 frags/file
     Average fragmentation after : 3 frags/file

After waiting anxiously for 40 minutes, the results of the operation came through. We went from a database split into 102951 fragments to a database split into just 3.

I revived the database by bringing it back online, verified it was working, then carried on with my weekend. All in all, it only took me about an hour and a half on a Saturday to sort this out.

Results

Before doing the work over the weekend I took some rudimentary speed tests of the SIMS application, and repeated them afterwards, to determine whether the change actually had any effect.

I performed two tests, three times each. They wouldn't stand up in a court of law, but they're good enough for my purposes.

I timed how long it took to log in - from hitting 'Enter' on the username/password window to the logged-in GUI finishing loading - and I also ran a report on our on-roll students containing their forename, surname, year & registration group code. A simple report.

Both of these tests were performed at approx 11am on a weekday. Nothing else was going on at the time (no large reports) but SIMS was actively being used by a few dozen staff. The results are in seconds.

Before defrag - Friday

Logging In

  • 16.7
  • 7.1
  • 9.9

Report

  • 18.6
  • 15.7
  • 18.7

After defrag - Monday

Logging In

  • 10.6
  • 5.7
  • 5.8

Report

  • 14.1
  • 11.9
  • 14.3

As you can see, the defrag has likely (assuming there was no other, unknown reason for the initially slow results) shaved a number of seconds off each run. The first login of the day is always the slowest, as some stuff gets cached on your machine, but even that was sped up (by over 6 seconds!)

Overall, whilst neither conclusive nor scientifically sound, I am happy with this approximate 30% speed boost, and I have begun looking into our other databases to see if they're suffering from the same fragmentation.

I would advise you to take a look at your SIMS database, too. Hell, any database you have. If you're a database guru you probably know about this already and will laugh at my realisation, but it was genuinely surprising news to me, though obvious in hindsight. I know about file fragmentation; I just didn't think about it in the context of automatic database growth.

Being able to say “I've increased the speed of one of our most critical applications by between 24% and 40%” sure feels good. Though it should never have been set up this way in the first place…

