I am just writing a brief post about a conversation I was asked to join.
The question paraphrased:
When dates in a Microsoft SQL Server database get passed through Qlik Replicate to Kafka Avro, how are they interpreted? As an epoch value in UTC, or in the current time zone?
I didn’t know the answer, so I created the following simple test:
This is a Mac + Cheese recipe that I learnt from the OMG BBQ course.
It is truly frightening how decadent it is; I have never seen so much cheese go into one recipe. And that’s not even counting the amount of cream that goes in as well.
This is the recipe as dictated by the owner, so I haven’t tried cooking it myself; but the result was the most gooey, artery-clogging, heart-stopping Mac n Cheese I have ever tasted.
The Legend behind the recipe
The legend is that this recipe originates from a “gangster” living in America, involved in all sorts of violence and drugs. One day he had an epiphany that he was heading towards a very early grave.
So he got out of crime, went to rehab to get clean and opened up a BBQ restaurant/food van.
But he was looking for ideas to expand his business.
Using the intuition from his former life, and the opportunity created by some US states having legalised cannabis, he realised there was a market for good “munchies” food at the cannabis festivals that are regularly held.
The problem with many food trucks’ menu choices is that hot chips are a bottleneck in the process; quite often people are left waiting for chips to cook.
He was searching for something he could serve with BBQ meats: quick and easy to make, not a bottleneck in a food truck business, and satisfying the festival-goers’ particular needs.
This is when he came up with this recipe for Mac and Cheese.
The Recipe
Guesstimate serves 8 people
Ingredients
500g Macaroni pasta shapes
1kg Tasty cheese – grated
50g Parmesan cheese – grated
2 large eggs
1L Thickened cream
2 tsp ground Black pepper
2 tsp Garlic Powder (Guesstimate – chef did it by feel)
2 tsp Onion powder (Guesstimate – chef did it by feel)
2 tsp Smoked Paprika (Guesstimate – chef did it by feel)
Extra Paprika for dusting on top
Method
Pre-heat the oven to 160°C.
Cook the pasta to packet instructions. Drain the pasta and allow it to cool slightly.
Reserve a cup of grated cheese.
Meanwhile, in a large bowl, add the remaining cheese, eggs, thickened cream and seasonings. Gently mix to combine.
Once the pasta is cool enough, add it to the bowl and stir until everything is combined.
Pour into a baking dish. Sprinkle over reserved cheese and dust evenly with paprika.
Place the dish on a tray to catch any drippings and bake for 25 minutes until lightly browned on top.
Rest for 5 minutes and then serve.
Notes
While I don’t think one’s cardiologist would recommend eating this every day, I do see some potential in using this recipe as a base. It would be a show-stopping comfort food on a cold winter’s night, either served as a side or as a main with some nice bread.
One thing I would change from the base recipe we were taught is to undercook the pasta by a couple of minutes. This will stop it going too mushy in the final product.
Other things I was thinking:
Don’t use pre-grated cheese – the additives in it stop the cheese from melting as well.
Add some cream-friendly vegetables like pumpkin, broccoli, mushrooms and/or corn.
Instead of smoked paprika, use nutmeg.
Experiment with a different combination of cheeses to add a more complex flavour.
My introduction to my in-laws’ Christmas traditions was “Nancy’s Noodle Salad”; a recipe handed down to my wife-to-be that she would cook for an Australian Christmas lunch.
The only trouble is that, in her excitement for Christmas, she quite often cooks way too much, and we end up eating “Nancy’s Noodle Salad” for a week afterwards.
But this salad is important: nostalgia for times gone by and remembering loved ones who have passed on. And someone out on this vast World Wide Web might pick up Nancy’s Christmas Noodle Salad and start their own tradition with it.
Anyway – I can’t read the original recipe, so I have to get my wife to interpret the handwriting.
The Recipe
Makes a side for about 4 – my wife quadruples it for Christmas
Ingredients
Main Salad component
250g of Short pasta shapes
1 stick of Celery – diced
Half each of a red and a green capsicum – diced
Small can of corn
1/2 cup Sultanas
300g Bacon – diced
Dressing
1/2 cup Salad oil (I just use canola oil)
1/2 cup Sugar
1/2 cup Vinegar
1 or 2 tsp of Keen’s curry powder
Method
Combine all the dressing ingredients in a jar with a tight-fitting lid. Shake to combine and then set aside.
Cook the pasta to packet instructions. Once cooked, rinse under cold water, drain well and place in a large serving bowl.
Add the bacon to a cold non-stick frypan. Turn to medium-high heat and stir occasionally until white foam appears around the cooked bacon pieces. Remove the bacon from the pan and drain on paper towels.
Add the diced celery, diced capsicum, corn, sultanas and bacon to the serving bowl. This can be covered and placed in the fridge until ready to serve.
When ready to serve, mix the salad ingredients together. Drizzle over the dressing to taste and mix it through the salad.
For over fifteen years I have worked with a Microsoft SQL Data Warehouse that is over twenty-five years old. Although it is ancient in this world of huge cloud-based Data Vaults, it churns out an abundance of value to the organisation I work for.
The problem with this value is that it is open to anyone who wants access to build their queries off it. This means we get a huge range of SQL running on the database: from precise, efficiently built queries that use every optimisation trick in the book, all the way to “WTF are you trying to do?!” queries.
The second category is the one we have the most problems with. With the limited resources of an on-prem SQL Server, some users write atrocious queries without any apparent care for the other people using the database. There is nothing more frustrating than getting a support call-out because of a failed data load, only to find the table is locked by someone whose query has been running for the past six hours.
To counter this problem, I created a Python script that checks what is running on the database every fifteen minutes. If a user has a query that has been running for over half an hour, I get an alert showing what they are running (a sketch of the underlying check follows the list below).
This allows our team to:
Work out if someone is running a query that is blocking or taking resources away from our critical processes – resulting in timeouts
Use our experience to help users optimise poorly performing queries so they get a better outcome
Stop any stupid queries from running
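The Python script itself is not much more than a scheduled wrapper around this kind of DMV check – a sketch, assuming VIEW SERVER STATE permission (the real script adds the logging and alerting):
-- Flag any user request that has been executing for more than 30 minutes
SELECT
    s.login_name,
    r.session_id,
    r.start_time,
    DATEDIFF(MINUTE, r.start_time, SYSDATETIME()) AS minutes_running,
    t.text AS running_sql
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1                                  -- ignore system sessions
  AND DATEDIFF(MINUTE, r.start_time, SYSDATETIME()) >= 30;   -- running for over half an hour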
Since the queries are logged to a database, I have four years of history covering 14,000 unique queries. This gives me a lot of learning experience about what users are running and what problems they face.
From this learning, here are the Top 10 most horrible queries I see running on our database.
8. The “Discovery” Query
This query comes from both new and experienced users exploring the data in the database. They will run something like this:
SELECT *
FROM dbo.A_VERY_LARGE_TABLE;
They look at the data, grab what they need, and then just leave the query running in the background in SQL Server Management Studio. When we contact them an hour later, they act surprised that the query is still running in the background, returning millions upon millions of rows to their client.
This is an easy fix with training. Using TOP to limit the number of rows returned, using sp_help to return the schema of an object, or providing a sandbox for people to explore in are all relatively simple fixes for this problem.
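For example, either of these is a far gentler way to explore (using the same example table name):
-- Peek at a sample of rows instead of pulling back the whole table
SELECT TOP (100) *
FROM dbo.A_VERY_LARGE_TABLE;

-- Or just describe the object's columns without reading any data
EXEC sp_help 'dbo.A_VERY_LARGE_TABLE';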
7. The “Application Preview” Query
Our SAS application has a nasty default setting: if a user previews the data in a table, it will try to return the whole table, with the user completely unaware that this is happening.
Even worse, if the application locks up while returning the data, the user kills it from Task Manager, opens it back up and performs the same steps. Since the first query was never cancelled, it is still running on the database. So quite often you will see half a dozen copies of the same query running, all started at different times.
To prevent this, it is important to assess new database client software to see what it is doing against the database in the background. What does “preview” mean? Returning 10, 100 or all the rows? Is there a setting that restricts the number of rows returned, or a timeout that stops a complex view from running forever just to return a small set of data?
If these features are available, it is important to either set them as defaults in the application, or make them part of the initial user setup / training program.
6. The “I am going to copy this to Excel and Analyse the Data” Query
This is similar to the exploratory queries above. The user is running something like:
SELECT *
FROM dbo.A_VERY_LARGE_TABLE;
“Why are you running this query?” I politely enquire. “Oh – I am going to copy the data into Excel and search/pivot/do fancy things with it.” “Ummm – this table has 8 billion rows in it…”
I can sympathise with queries like these.
The user thinks, “Why run the same query over and over again to get the data that I want to explore and present in different ways? Let’s just get one large cut and then manipulate it on the client.”
What users don’t realise is that dealing with a huge amount of data on the client side is difficult. Exporting data out of the query client into an application like Excel can be frustrating, given the memory and CPU limitations compared to a spec’d-out server. Plus, Excel is simply not going to handle 8 billion rows.
The requirements behind queries like these need to be analysed. Do the users really need that atomic-level data? Can strategic aggregation tables be provided so they have a more manageable amount of data to return? Can different technology, like OLAP cubes, be built on top of the large tables to answer the user’s needs?
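As a sketch of the aggregation-table idea – the grouping and value columns here are purely illustrative, not the real warehouse schema:
-- Pre-summarise the atomic rows once on the server;
-- users then query the much smaller summary table instead of the 8 billion row table
SELECT
    BUSINESS_DATE,                  -- illustrative grouping columns
    BRANCH,
    COUNT_BIG(*)  AS ROW_COUNT,
    SUM(BALANCE)  AS TOTAL_BALANCE  -- illustrative measure
INTO dbo.A_VERY_LARGE_TABLE_DAILY_SUMMARY
FROM dbo.A_VERY_LARGE_TABLE
GROUP BY BUSINESS_DATE, BRANCH;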
5. The “I got duplicates so I am going to get rid of them with a DISTINCT” Query
This is a pet peeve of mine.
The user has duplicates in their results and doesn’t know why. Instead of investigating why there is duplication (usually the result of a poor join), they just slap a DISTINCT at the top of the query and call it done. Here is a simplified example that I regularly come across:
DECLARE @ACCOUNTS TABLE
(
ORG INT,
ACCOUNT_ID INT,
BRANCH INT,
BALANCE NUMERIC(18,2),
PRIMARY KEY (ORG, ACCOUNT_ID)
);
INSERT INTO @ACCOUNTS VALUES(1, 18, 60, 50);
INSERT INTO @ACCOUNTS VALUES(1, 19, 60, 150);
DECLARE @BRANCHES TABLE
(
ORG INT,
BRANCH INT,
BRANCH_NAME VARCHAR(20),
BRANCH_STATE VARCHAR(3),
PRIMARY KEY (ORG, BRANCH)
);
INSERT INTO @BRANCHES VALUES(1, 60, 'Branch of ORG 1', 'VIC');
INSERT INTO @BRANCHES VALUES(2, 60, 'Branch of ORG 2', 'VIC');
SELECT
A.ORG,
A.ACCOUNT_ID,
A.BRANCH,
A.BALANCE,
B.BRANCH_STATE
FROM @ACCOUNTS A
JOIN @BRANCHES B
ON -- B.ORG = A.ORG -- User forgot this predicate in the join
B.BRANCH = A.BRANCH;
Results:

ORG   ACCOUNT_ID   BRANCH   BALANCE   BRANCH_STATE
1     18           60       50.00     VIC
1     18           60       50.00     VIC
1     19           60       150.00    VIC
1     19           60       150.00    VIC
The user looks – “Oh Dear – I’ve got duplicates! Let’s get rid of them.”
SELECT DISTINCT
A.ORG,
A.ACCOUNT_ID,
A.BRANCH,
A.BALANCE,
B.BRANCH_STATE
FROM @ACCOUNTS A
JOIN @BRANCHES B
ON -- B.ORG = A.ORG -- User forgot this predicate in the join
B.BRANCH = A.BRANCH;
This poses many problems:
The database must work harder using incomplete joins on indexes to bring across the data, and then work harder again suppressing the duplicates.
If the field list changes, then the DISTINCT might not work anymore.
For example, with the above query, if the user brings in the column BRANCH_NAME then the duplicates return:
SELECT DISTINCT
A.ORG,
A.ACCOUNT_ID,
A.BRANCH,
A.BALANCE,
--------- Add in new column ---------
B.BRANCH_NAME,
-------------------------------------
B.BRANCH_STATE
FROM @ACCOUNTS A
JOIN @BRANCHES B
ON -- B.ORG = A.ORG -- User forgot this predicate in the join
B.BRANCH = A.BRANCH;
ORG   ACCOUNT_ID   BRANCH   BALANCE   BRANCH_NAME       BRANCH_STATE
1     18           60       50.00     Branch of ORG 1   VIC
1     18           60       50.00     Branch of ORG 2   VIC
1     19           60       150.00    Branch of ORG 1   VIC
1     19           60       150.00    Branch of ORG 2   VIC
Preventing queries like this comes down to user experience and training. If users see duplicates in their data, their initial thought should be “Why do I have duplicates?”
Duplicates might be legitimate and a DISTINCT might be fine – but quite often they are the result of a bad join. For an inexperienced user it can be overwhelming to break down a large query to find where the duplicates are coming from, and they might need help from a more experienced user. This is where internal data forums are useful, where people can help each other with their problems.
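For the simplified example above, the real fix is simply to restore the missing predicate; the duplicates then disappear without any DISTINCT:
SELECT
    A.ORG,
    A.ACCOUNT_ID,
    A.BRANCH,
    A.BALANCE,
    B.BRANCH_NAME,
    B.BRANCH_STATE
FROM @ACCOUNTS A
JOIN @BRANCHES B
    ON  B.ORG    = A.ORG      -- the previously missing predicate
    AND B.BRANCH = A.BRANCH;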
4. The “Ever lengthening” Query
This is a query I see scheduled daily for users getting trends over time for a BI tool:
SELECT *
FROM dbo.A_TABLE A
JOIN dbo.B_TABLE B
ON B.KEY_1 = A.KEY_1
WHERE
--------------- Start Date ---------------
A.BUSINESS_DATE >= '2025-01-01' AND
------------------------------------------
A.PREDICATE_1 = 'X' AND
B.PREDICATE_2 = 'Y';
Initially it starts off OK: it runs fast and the user is happy. But as time goes by, the query gets slower and slower, returning more and more data. Eventually it gets to the point where the query goes off into the nether and never returns. Since it always starts from the same point, it is constantly re-querying the same data over and over again.
There are a couple of options for handling queries like these:
Really examine the user’s requirements. How much data is actually relevant for their trending report? In 2027, will data from 2025 still be useful to the reader of the report?
Create a table specific to this report and append one timeframe of data (e.g. daily) onto it. The report can then just query this table, rather than regenerating complex joins over and over again for previous timeframes (see the sketch below).
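A sketch of that second option, assuming a dedicated reporting table (dbo.REPORT_TREND_TABLE is an illustrative name):
-- Nightly job: append only the latest day's rows instead of re-querying everything since 2025-01-01
INSERT INTO dbo.REPORT_TREND_TABLE        -- illustrative target, with columns matching the select list
SELECT A.*, B.PREDICATE_2                 -- column list would match the real report's needs
FROM dbo.A_TABLE A
JOIN dbo.B_TABLE B
    ON B.KEY_1 = A.KEY_1
WHERE
    A.BUSINESS_DATE = CAST(DATEADD(DAY, -1, SYSDATETIME()) AS DATE) AND   -- yesterday's data only
    A.PREDICATE_1 = 'X' AND
    B.PREDICATE_2 = 'Y';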
3. The “Linked Database” Query
With the growth of our Data Warehouse, other databases on different servers started building process flows off our database, sometimes without our knowledge until we see a linked server connection appear.
The pattern in which they use our database is where the problems lie. One downstream database runs this query every day:
SELECT *
FROM REMOTE_SERVER.SOME_DATABASE.dbo.A_BIG_HISTORICAL_TABLE;
So they are truncating a landing table on their database and grabbing the whole source table. Hundreds of millions of rows transferred across a slow linked server, taking hours and hours. And with time this will only get slower as the source table accumulates history.
This is a tricky one to tackle with downstream users. Odds are that if they designed a cumbersome process like this, their appetite for changing to a faster (yet more complex) solution will not be high, especially if their process ‘works’.
If you can get around the political hurdle and convince them the process must change, there are many options to improve the performance.
Use an ETL tool to transfer the data across. Even a simple Python script copying the data across batch by batch will be quicker than using a linked server.
Assess the downstream use cases for the data. Do they need the whole table with the complete history to satisfy their requirements? Do they need all the columns, or can some be trimmed off to reduce the amount of data being transferred?
If the source table is a historical table, only bring across the period deltas. With the downstream process I am having trouble with now, they bring across twenty-one million rows daily. If they only brought across the deltas from the nightly loads, it would be a whopping five thousand rows – that is, 0.02% of the total row count. Add a row-count check between the two systems for confidence that the two tables are in sync, and the process will be astronomically quicker (see the sketch after this list).
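A sketch of the delta approach from the downstream side, assuming the source table carries a load timestamp column (LOAD_DATETIME and the local landing table name are illustrative):
-- Work out the high-water mark already held locally
DECLARE @LastLoaded DATETIME2 =
    (SELECT MAX(LOAD_DATETIME) FROM dbo.LOCAL_LANDING_TABLE);

-- Pull only the rows added since the last refresh
INSERT INTO dbo.LOCAL_LANDING_TABLE
SELECT *
FROM REMOTE_SERVER.SOME_DATABASE.dbo.A_BIG_HISTORICAL_TABLE
WHERE LOAD_DATETIME > @LastLoaded;

-- Simple row-count reconciliation to confirm the two sides are still in sync
SELECT
    (SELECT COUNT_BIG(*) FROM dbo.LOCAL_LANDING_TABLE)                                AS local_rows,
    (SELECT COUNT_BIG(*) FROM REMOTE_SERVER.SOME_DATABASE.dbo.A_BIG_HISTORICAL_TABLE) AS remote_rows;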
2. The “I don’t know where that query comes from” Query.
We have an ongoing problem with a Microsoft Power BI report that runs a query looking at data from the beginning of the month to the current date. It was a poorly written query, so as the month went along its performance got worse and worse.
Since the username associated with the query was the Power BI server’s service account, we had no idea who the query belonged to – even though a couple of simple fixes could have drastically improved its performance.
We contacted the team that manages the Power BI server and asked them to find the query and update the SQL code.
They said it was completed, and the report SQL was updated.
But soon the SQL was back – running for hours and hours.
We contacted the team again –
“Hey, that query is back.”
They tried updating the report again; but soon the query was back again.
So either someone was repeatedly uploading an original copy, or another copy of the report was buried somewhere on the server where the admins could not find it.
Since we do not have access to their system, it is hard to determine what is happening.
The barbaric solution would be to block the Power BI user from our system; but goodness knows how many critical business processes that would disrupt.
The best solution we have at the minute is to just keep killing the running SQL on the database, in the hope that someone will eventually identify the constantly failing report. This can be tricky too, if the Power BI server automatically retries failed reports.
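Finding those sessions is straightforward from the DMVs (the service account name below is illustrative); the KILL then has to be issued per offending session:
-- Identify active requests coming from the Power BI service account
SELECT s.session_id, r.start_time, r.status
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
WHERE s.login_name = 'DOMAIN\PowerBIService';   -- illustrative service account name

-- Then terminate a specific offending session using the session_id returned above
-- KILL 123;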
1. The “I don’t listen to your advice” Query.
Disclaimer – this is a frustration rant that I have to get off my chest.
We have a user who constantly runs a long-running crook query. When I collated all the long-running queries in research for this post, his was at the top by a significant margin.
The problem with the query is a join on an uncommonly used business key that is not indexed on the table. The fix itself is quite simple: use the primary keys in the join.
But he has been running the same code for months; it has locked our nightly loads several times and caused incidents.
We tried the carrot approach. “Hey, here is optimised code. It will make your query run quicker and be less of a burden on our database.” We got indifferent replies.
After more locked loads we included his manager in the correspondence, but she did not seem to want to be involved.
From my carrot-approach communication, I gained the impression that he has a self-righteous personality and thinks his role and his report are above question.
Enough was enough – it was time for the stick approach.
We raised an operational risk against his activity.
The Ops risk manager asked, “Do you have evidence that you attempted to reason with him and help?”
“Yep,” I replied, “Here is a zipped-up folder of dozens of emails and chat messages that I sent him.”
Senior managers got involved, with my senior manager commenting to my manager in the background, “Is that guy all there?”
Anyway, my manager wrote a very terse email telling him to correct his query, and that if he crashes the loads again his access to the database will be removed.
A non-committal reply. No apology. And to this day he is still running the same query, just far enough under the radar that it is not locking our loads.
I am watching him like a hawk – waiting for the day that I can cut his access from the database.
It is a pity that our database does not have a charge-back capability for the processing people use. I bet if his manager got a bill of $$$ for one person running one query, she would be more proactive about this problem.
In retrospect, I would have campaigned to have his access cut right away and made him justify why he should get it back. When there are hundreds of other people doing the right thing on the database, it is not fair that their deliverables get impacted by the indifferent attitude of one user.