“Idiots doing Idiot things” – The Top Eight Worst SQL Queries Ever

For over fifteen years I have worked with a Microsoft SQL Data Warehouse that is over twenty-five years old.  Although it is ancient in this world of huge cloud-based Data Vaults, it churns out an abundance of value for the organisation I work for. 

The problem with this value: it is open to anyone who wants access to build their own queries off it.  This means we get a wide range of SQL queries running on the database, from precise, efficiently built queries that use every optimisation trick in the book, all the way to “WTF are you trying to do?!” queries.

The second category is the one we have the most problems with.  With the limited resources of an on-prem SQL Server, some users write atrocious queries without any apparent care for other people using the database.  There is nothing more frustrating than getting a support call-out for a failed data load, only to find the table is locked by a query someone has been running for the past six hours.

To counter this problem, I created a Python script that checks what is running on the database every fifteen minutes.  If a user has a query running for over half an hour, I get an alert showing what they are running.
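Under the hood, a check like this can be as simple as a query against SQL Server's dynamic management views.  Here is a minimal sketch of the kind of statement such a script could run – an illustration only, not our actual production code:

-- Find user queries that have been executing for more than 30 minutes
SELECT
	r.session_id,
	s.login_name,
	r.start_time,
	DATEDIFF(MINUTE, r.start_time, GETDATE()) AS minutes_running,
	t.text AS sql_text
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s
	ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.is_user_process = 1
	AND DATEDIFF(MINUTE, r.start_time, GETDATE()) >= 30;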

This allows our team to:

  1. Work out if someone is running a query that is blocking or taking resources away from our critical processes, resulting in timeouts
  2. Use our experience to help users optimise poorly running queries so they get a better outcome
  3. Stop any stupid queries from running

Since the queries are logged to a database, I have four years of history covering 14,000 unique queries.  This gives me a lot of insight into what users are running and what problems they face.

From this learning, here are the top eight most horrible queries I see running on our database.

8. The “Discovery” Query

This query comes from new and experienced users alike, exploring the data in the database.  They will run something like this:

SELECT *
FROM dbo.A_VERY_LARGE_TABLE;

They look at the data, grab what they need, and then just leave the query running in the background in SQL Server Management Studio.  When we contact them an hour later, they act surprised that the query is still running in the background, returning millions upon millions of rows to their client.

This is an easy fix with training.  Using the TOP clause to limit the number of rows returned, using SP_HELP to return the schema of an object, and providing a sandbox for people to explore in are all relatively simple fixes for this problem.
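For example, against the same hypothetical table:

-- Peek at a sample of rows instead of pulling the whole table
SELECT TOP (100) *
FROM dbo.A_VERY_LARGE_TABLE;

-- Or inspect the structure without reading any rows at all
EXEC sp_help 'dbo.A_VERY_LARGE_TABLE';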

7. The “Application Preview” Query

Our SAS application has a nasty default setting: if a user previews the data in a table, it will try to return the whole table, with the user completely unaware that this is happening. 

Even worse, if the application locks up while returning the data, the user kills it from Task Manager, opens it back up and performs the same steps.  Since the first query was never killed, it will still be running on the database.  So quite often you will see half a dozen copies of the same query running on the database, all started at different times.
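When this happens, spotting the pile-up is straightforward.  This is a sketch against the standard DMVs (not our exact monitoring code) that groups the currently executing requests by query hash and flags anything running more than once:

-- The same statement running several times concurrently is a strong hint of a preview pile-up
SELECT
	r.query_hash,
	COUNT(*) AS concurrent_copies,
	MIN(r.start_time) AS earliest_start,
	MAX(r.start_time) AS latest_start
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s
	ON s.session_id = r.session_id
WHERE s.is_user_process = 1
GROUP BY r.query_hash
HAVING COUNT(*) > 1;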

To prevent this, it is important to assess new database client software to see what it is trying to do in the background.  What does “preview” mean?  Return 10, 100 or all the rows?  Is there a setting that restricts the number of rows returned, or a timeout that stops a complex view running forever just to return a small set of data?

If these features are available, it is important to either set them as the default in the application, or make them part of the initial user setup / training program.

6. The “I am going to copy this to Excel and Analyse the Data” Query

This is similar to the exploratory queries above.  The user is running a query like:

SELECT *
FROM dbo.A_VERY_LARGE_TABLE;

“Why are you running this query?” I politely enquire.
“Oh – I am going to copy the data into Excel and search/pivot/do fancy things with it.”
“Ummm – this table has 8 billion rows in it…”

I can sympathise with queries like these.

The user thinks, “Why run the same query over and over again to get the data that I want to explore and present in different ways?  Let’s just get one large cut and then manipulate it on the client.”

What users don’t realise is that dealing with a huge amount of data on the client side is difficult.  Exporting data out of the query client into an application like Excel runs straight into the memory and CPU limitations of a desktop compared to a spec’d-out server.  Plus, Excel is simply not going to handle 8 billion rows (a worksheet tops out at just over a million).

The requirements behind queries like these need to be analysed.  Do the users really need that atomic-level data?  Can strategic aggregation tables be provided so they have a more manageable amount of data to return?  Can a different technology, like OLAP cubes, be built on top of the large tables to answer the user’s needs?
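As a sketch of the aggregation idea, assuming the big table has date, org, branch and balance columns (hypothetical names), a nightly summary table like this gives users something a client tool can actually cope with:

-- Pre-aggregate the atomic rows to the level the user actually pivots on
SELECT
	BUSINESS_DATE,
	ORG,
	BRANCH,
	COUNT(*) AS ACCOUNT_COUNT,
	SUM(BALANCE) AS TOTAL_BALANCE
INTO dbo.A_VERY_LARGE_TABLE_DAILY_SUMMARY
FROM dbo.A_VERY_LARGE_TABLE
GROUP BY BUSINESS_DATE, ORG, BRANCH;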

5. The “I got duplicates so I am going to get rid of them with a DISTINCT” Query

This is a pet peeve of mine.

The user has duplicates in their results and doesn’t know why.  Instead of investigating why there is duplication (usually the result of a poor join), they just slap a DISTINCT at the top of the query and call it done.  Here is a simplified example that I regularly come across:

DECLARE @ACCOUNTS TABLE
(
	ORG INT,
	ACCOUNT_ID INT,
	BRANCH INT,
	BALANCE NUMERIC(18,2),
	PRIMARY KEY (ORG, ACCOUNT_ID)
);

INSERT INTO @ACCOUNTS VALUES(1, 18, 60, 50);
INSERT INTO @ACCOUNTS VALUES(1, 19, 60, 150);

DECLARE @BRANCHES TABLE
(
	ORG INT,
	BRANCH INT,
	BRANCH_NAME VARCHAR(20),
	BRANCH_STATE VARCHAR(3),
	PRIMARY KEY (ORG, BRANCH)
);

INSERT INTO @BRANCHES VALUES(1, 60, 'Branch of ORG 1', 'VIC');
INSERT INTO @BRANCHES VALUES(2, 60, 'Branch of ORG 2', 'VIC');


SELECT 
	A.ORG,
	A.ACCOUNT_ID, 
	A.BRANCH,
	A.BALANCE,
	B.BRANCH_STATE
FROM @ACCOUNTS A
JOIN @BRANCHES B
ON	-- B.ORG = A.ORG	-- User forgot this predicate in the join
	B.BRANCH = A.BRANCH;

Results:

ORG  ACCOUNT_ID  BRANCH  BALANCE  BRANCH_STATE
1    18          60      50.00    VIC
1    18          60      50.00    VIC
1    19          60      150.00   VIC
1    19          60      150.00   VIC

The user looks – “Oh Dear – I’ve got duplicates! Let’s get rid of them.”

SELECT DISTINCT
	A.ORG,
	A.ACCOUNT_ID, 
	A.BRANCH,
	A.BALANCE,
	B.BRANCH_STATE
FROM @ACCOUNTS A
JOIN @BRANCHES B
ON	-- B.ORG = A.ORG	-- User forgot this predicate in the join
	B.BRANCH = A.BRANCH;

This poses a couple of problems:

  1. The database must work harder using incomplete joins on indexes to bring across the data, and then work harder again suppressing the duplicates
  2. If the field list changes, then the DISTINCT might not work anymore.

For example, with the above query – if the user brings in the column “BRANCH_NAME” then the duplicates return:

SELECT DISTINCT
	A.ORG,
	A.ACCOUNT_ID, 
	A.BRANCH,
	A.BALANCE,
	--------- Add in new column ---------
	B.BRANCH_NAME,
	-------------------------------------
	B.BRANCH_STATE
FROM @ACCOUNTS A
JOIN @BRANCHES B
ON	-- B.ORG = A.ORG	-- User forgot this predicate in the join
	B.BRANCH = A.BRANCH;

Results:

ORG  ACCOUNT_ID  BRANCH  BALANCE  BRANCH_NAME      BRANCH_STATE
1    18          60      50.00    Branch of ORG 1  VIC
1    18          60      50.00    Branch of ORG 2  VIC
1    19          60      150.00   Branch of ORG 1  VIC
1    19          60      150.00   Branch of ORG 2  VIC

Preventing queries like this comes down to user experience and training.  If they see duplicates in their data, their first thought should be “Why do I have duplicates?”   

Duplicates might be legitimate and a DISTINCT might be fine, but quite often they are the result of a bad join.  For an inexperienced user, it might be overwhelming to break down a large table to find where the duplicates are coming from, and they might need help from a more experienced user.  This is where internal data forums are useful, where people can help each other with their problems.
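For completeness, here is the example join with the forgotten predicate restored.  Run in the same batch as the declarations above, it returns exactly two rows, one per account, with no DISTINCT required:

SELECT 
	A.ORG,
	A.ACCOUNT_ID, 
	A.BRANCH,
	A.BALANCE,
	B.BRANCH_NAME,
	B.BRANCH_STATE
FROM @ACCOUNTS A
JOIN @BRANCHES B
ON	B.ORG = A.ORG
AND	B.BRANCH = A.BRANCH;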

4. The “Ever lengthening” Query

This is a query I see scheduled daily for users getting trends over time for a BI tool:

SELECT *
FROM dbo.A_TABLE A
JOIN dbo.B_TABLE B
ON	B.KEY_1 = A.KEY_1
WHERE
	--------------- Start Date ---------------
	A.BUSINESS_DATE >= '2025-01-01' AND
	------------------------------------------
	A.PREDICATE_1 = 'X' AND
	B.PREDICATE_2 = 'Y';

Initially it starts off fine: the query runs fast and the user is happy.  But as time goes by, the query gets slower and slower, returning more and more data.  Eventually it gets to the point where the query goes off into the nether and never returns.  Since it always starts from the same point, it is constantly re-querying the same data over and over again.

There are a couple of options for handling queries like these:

  1. Really examine the user’s requirements. How much data is actually relevant for their trending report? In 2027, will data from 2025 still be useful to the reader of the report?
  2. Create a table specific to this report and append one timeframe of data (eg daily) onto it. The report can then just query this specific table rather than regenerating complex joins over previous timeframes again and again (see the sketch below).
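A minimal sketch of option 2, using a hypothetical report table and assuming the snapshot runs each morning for the previous day:

-- Append only yesterday's slice to the pre-built report table
INSERT INTO dbo.TREND_REPORT_SNAPSHOT
SELECT
	A.*,
	B.PREDICATE_2
FROM dbo.A_TABLE A
JOIN dbo.B_TABLE B
ON	B.KEY_1 = A.KEY_1
WHERE
	A.BUSINESS_DATE = CAST(DATEADD(DAY, -1, GETDATE()) AS DATE) AND
	A.PREDICATE_1 = 'X' AND
	B.PREDICATE_2 = 'Y';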

3. The “Linked Database” Query

With the growth of our Data Warehouse, other databases on different servers started building process flows off our database, sometimes without our knowledge until we see a linked server connection coming up.

The pattern in which they use our database is where the problems lie.  One downstream database runs this query every day:

SELECT *
FROM REMOTE_SERVER.SOME_DATABASE.dbo.A_BIG_HISTORICAL_TABLE;

So they are truncating a landing table on their database and grabbing the whole table: hundreds of millions of rows transferring across a slow linked server, taking hours and hours.  And with time this will only get slower, as the source table grows its history.

This is a tricky one to tackle with the downstream users.  Odds are that if they designed a cumbersome process like this, their appetite for changing to a faster (yet more complex) solution will not be high, especially if their process ‘works’.

If you can get around the political hurdle and convince them the process must change, there are many options to improve the performance.

  1. Use an ETL tool to transfer the data across.  Even a simple Python script copying files across batch by batch will be quicker than using a linked server.
  2. Assess the downstream use cases for the data.  Do they need the whole table with the complete history to satisfy their requirements?  Do they need all the columns, or can some be trimmed off to reduce the amount of data being transferred?
  3. If the source table is a historical table, only bring across the period deltas (see the sketch after this list).  With the downstream process I am having trouble with now, they are bringing across twenty-one million rows daily.  If they only brought across the deltas from the night loads, they would bring across a whopping five thousand rows.  That is 0.02% of the total row count.  Add in a row count check between the two systems to have confidence that the two tables are in sync, and the process will be astronomically quicker.
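A sketch of the delta approach from the downstream side, assuming the source table carries a load-date column (hypothetical name):

-- Land only the rows loaded overnight instead of re-copying the entire history
INSERT INTO dbo.A_BIG_HISTORICAL_TABLE_LANDING
SELECT *
FROM REMOTE_SERVER.SOME_DATABASE.dbo.A_BIG_HISTORICAL_TABLE
WHERE LOAD_DATE = CAST(GETDATE() AS DATE);

-- Row count check: the landing table should match the source once the delta is in
SELECT COUNT(*) AS SOURCE_ROWS FROM REMOTE_SERVER.SOME_DATABASE.dbo.A_BIG_HISTORICAL_TABLE;
SELECT COUNT(*) AS LANDED_ROWS FROM dbo.A_BIG_HISTORICAL_TABLE_LANDING;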

2. The “I don’t know where that query comes from” Query.

We have a continuing problem with a Microsoft Power BI report that runs a query covering data from the beginning of the month to the current date.  It was a poorly written query, so as the month went along its performance got worse and worse.

Since the username associated with the query was the Power BI server’s service account, we had no idea who the query belonged to, even though a couple of simple fixes could have drastically improved its performance.

We contacted the team that manages the Power BI server and asked them to find the query and update the SQL code. 

They said it was completed, and the report SQL was updated.

But soon the SQL was back, running for hours and hours.

We contacted the team again –

“Hey that query is back.”

They tried updating the report again, but soon the query was back.

So, either someone was constantly uploading an original copy, or another copy of the report was buried somewhere on the server where the admins could not find it. 

Since we do not have access to their system, it is hard to determine what is happening.

The barbaric solution would be to block the Power BI account from our system, but goodness knows how many critical business processes that would disrupt. 

The best solution we have at the minute is to just keep killing the running SQL on the database, in the hope that someone will identify the constantly failing report.  Even this can be tricky if the Power BI server automatically retries failed reports.
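For what it is worth, the killing can at least be scripted.  This is a sketch only, with a made-up service account name, that kills any request from that account running for over an hour:

-- Build and run KILL statements for the report account's long-running requests
DECLARE @kill NVARCHAR(MAX) = N'';

SELECT @kill = @kill + N'KILL ' + CAST(r.session_id AS NVARCHAR(10)) + N'; '
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s
	ON s.session_id = r.session_id
WHERE s.login_name = N'DOMAIN\powerbi_service'	-- hypothetical service account
	AND DATEDIFF(MINUTE, r.start_time, GETDATE()) >= 60;

EXEC sys.sp_executesql @kill;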

1. The “I don’t listen to your advice” Query.

Disclaimer – this is a frustration rant that I have to get off my chest.

We have a user who constantly runs a long-running crook query.  When I collated all the long-running queries in research for this post, his was at the top by a significant margin.

The problem with the query itself is a join on an uncommonly used business key that is not indexed on the table.  The fix is quite simple: use the primary keys in the join.
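To illustrate the pattern with purely hypothetical table and column names (not the user’s actual query):

-- Slow: joining on an uncommonly used business key with no supporting index
SELECT F.BALANCE, D.PRODUCT_NAME
FROM dbo.FACT_BALANCES F
JOIN dbo.DIM_PRODUCT D
ON	D.LEGACY_PRODUCT_CODE = F.LEGACY_PRODUCT_CODE;

-- Faster: joining on the primary keys, which the indexes already support
SELECT F.BALANCE, D.PRODUCT_NAME
FROM dbo.FACT_BALANCES F
JOIN dbo.DIM_PRODUCT D
ON	D.ORG = F.ORG
AND	D.PRODUCT_ID = F.PRODUCT_ID;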

But he has been running the same code for months; it has locked our nightly loads several times and caused incidents.

We tried the carrot approach.  “Hey, here is optimised code.  It will make your query run quicker and be less of a burden on our database.”  We got indifferent replies. 

After more locked loads we included his manager in the correspondence, but she did not seem to want to be involved.

From my carrot-approach communication, I gained the impression that he has a self-righteous personality and thinks his role and report are above question.

Enough was enough – it was the stick approach time.

We raised an operational risk against his activity.

The Ops risk manager asked, “Do you have evidence that you attempted to reason with him and help?”

“Yep,” I replied, “Here is a zipped-up folder of dozens of emails and chat messages that I sent him.”

Senior managers got involved, with my senior manager commenting to my manager in the background, “Is that guy all there?”

Anyway, my manager wrote a very terse email telling him to correct the query, and that if it crashes the loads again, his access to the database will be removed.

A non-committal reply.  No apology.  And to this day he is still running the same query, just far enough under the radar that it is not locking our loads. 

I am watching him like a hawk – waiting for the day that I can cut his access from the database.

It is a pity that our database does not have a charge-back capability for the processing users consume.  I bet if his manager got a bill of $$$ for one person running one query, she would be more proactive about this problem. 

In retrospect, I would have campaigned to have his access cut right away and made him justify why he should get it back.  When there are hundreds of other people doing the right thing on the database, it is not fair that their deliverables get impacted by the indifferent attitude of one user.