July 2, 2010

Database Design 101c, #SQLU

Posted in Learning, SQL University at 9:00 am by David Taylor

Welcome Back Once Again!

For today, our last class together at SQL University (#SQLU on Twitter, the brainchild of Jorge Segarra, who blogs at The SQL Update Statement, and can also be found championing the Social side of DBAness as @SQLChicken on Twitter), let’s talk about keys, both primary and foreign, discuss indexing, and finally cover how to get information from data. Busy day today, so let’s get into it!

Thanks for my involvement (one more time!)

I would like to once more express my heartfelt thanks to Chancellor Segarra for inviting me to lecture in this esteemed forum. I’d also like to thank Coach Larock for re-emphasizing what I am telling you about here – his lecture was better than all three of mine combined! Read it, if you haven’t already!

Where are my Keys?

Keys are important!

When we talk about keys in a database, we are talking about a couple of different things. A primary key is the field (or combination of fields) in a table that uniquely identifies each row, like an order or customer number. Primary key values are never repeated within a table, because they are used to identify each row of data, and so they can be used to index a table, as we will discuss shortly.

Foreign keys are important, too!

Foreign keys are sometimes a tricky concept to understand. Tables are related in what are known as Parent-Child relationships, and it is the keys that define and control that relationship. In my previous lecture, we laid out a data model for Customers and Orders. When we normalized the table, we ended up with one table that had Orders, separated out from the Customers table, as illustrated below.


Cust#  Order#  Part#  Desc
0001   001012  345    Disc
0002   001453  651    Bowl
0003   001232  345    Disc

In this table, Order# is the Primary key and Cust# is the foreign key. The foreign key relates this table back to the Customers table, so that orders can be matched to customers.


Another thing Primary and Foreign keys allow, besides relationships, is a concept known as constraints. When the database is set up properly, the rows of data in the two tables are constrained to each other. This means the relationship between them is such that, for example, you cannot delete a customer without first deleting that customer’s orders. Imagine if that constraint were not in place and someone deleted a customer: the orders would still be out there, with a foreign key pointing to nothing.
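Since these lectures are prose-only, here is a small, hedged illustration of that constraint in action. It uses Python’s built-in sqlite3 rather than SQL Server, so it can run anywhere, and the table and column names (CustNo, OrderNo) are my own stand-ins for the lecture’s Cust# and Order# fields; the concept carries over to SQL Server unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

conn.execute("CREATE TABLE Customers (CustNo INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("""
    CREATE TABLE Orders (
        OrderNo INTEGER PRIMARY KEY,
        CustNo  INTEGER NOT NULL REFERENCES Customers(CustNo)  -- foreign key
    )""")

conn.execute("INSERT INTO Customers VALUES (1, 'John Smith')")
conn.execute("INSERT INTO Orders VALUES (1012, 1)")

# Deleting the parent row while a child row still references it fails:
try:
    conn.execute("DELETE FROM Customers WHERE CustNo = 1")
except sqlite3.IntegrityError as e:
    print("Blocked:", e)  # e.g. "FOREIGN KEY constraint failed"
```

Delete the order first (or define the relationship with cascading deletes) and the customer delete then succeeds; either way, no order can ever point to a customer that does not exist.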

Indexes, or how do I find things?

Primary Keys Define Indexes

The Primary keys that you define when constructing a table are generally also used to define the index for the table. Database indexing is a large subject unto itself; in this class we will just go over it briefly. Essentially, an index is how data gets found within a table, much like the index in a book is used to find information.

In a book, you would turn to the index to look up a certain word, and the index, in turn, would point you to the page on which that word is found. A database index works much the same way. When a table is queried, and the query contains a Where clause to limit data, the database engine takes the predicate of that Where clause and seeks through the index for it. Once found, the rest of the record is located as well, and the query returns the data to the user.

Without an index, the database engine would need to scan through every row of the table looking for the predicate before it could return the row in question, so you can see how much better performance will be with an index.
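To make the seek-versus-scan difference concrete, here is a quick sketch using Python’s built-in sqlite3. SQLite’s EXPLAIN QUERY PLAN plays the role SQL Server’s execution plan would, the Orders table and index name are made up for illustration, and the exact plan wording varies by SQLite version.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderNo INTEGER, CustNo INTEGER, Part INTEGER)")
conn.executemany("INSERT INTO Orders VALUES (?, ?, ?)",
                 [(i, i % 100, i % 7) for i in range(10_000)])

def plan(sql):
    # Last column of the first plan row is the human-readable strategy.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][-1]

query = "SELECT * FROM Orders WHERE OrderNo = 4242"
print(plan(query))   # typically "SCAN Orders": every row examined

conn.execute("CREATE INDEX ix_orders_orderno ON Orders(OrderNo)")
print(plan(query))   # now a SEARCH USING INDEX: straight to the row
```

Same query, same data; only the access path changed. On ten thousand rows the difference is invisible, on millions it is the difference between seconds and milliseconds.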

Indexes can be defined on something other than the Primary key, but in general, that’s what the index is based on.

Getting Information from Data

Raw Data is hard to Read

When a database is designed properly, with the correct amount of normalization, the raw data is not very intelligible to the human mind. There are keys and tables, bits of data here in this table, other bits in that table, and simply reading through a table will not get you very far if you want to know what’s going on in your business. Properly queried, though, your tables hold real information the business can use to keep tabs on operations, find trends in the data, and stay afloat and even grow.

All of this is made possible with the right set of queries to pull from the data. In my previous lecture, I showed how not to build a table, and I would like to illustrate here why it should not be built like that. As a reminder, here is the obnoxious offender.


FirstName  LastName  Address        City        State  ZipCode  Order#  Part#  Desc
John       Smith     123 Any St     Big City    MA     02134    001012  345    Disc
Jane       Doe       456 Main St    Small City  MA     01234    001453  651    Bowl
William    Smith     2 Mulberry Ln  Small City  MA     01234    001232  345    Disc

I will ask again, for the sake of time, that you imagine more orders added to the right side of the table, three columns for each, and, further, that a sales manager would like to know how many of which product is being sold to a certain city. The query would have to collate each set of order columns into a single column, bring in the city only once for each row, and only then get counts of each city’s orders. This would involve messy, low-performing pivots or CASE statements to separate the orders out; it is so bad I am not even going to try to write it out here, please just imagine it. Then imagine that this little table is thirty or forty columns wide with millions of records. The manager may as well start the query on Monday and hope for a result by the end of the week!

In a properly designed database, though, the query would simply get the cities from one table, join to orders in another, possibly tying them together with customers as a binder, and the count of product orders per city could be returned in seconds. And if that data can be returned quickly, other queries, each also taking mere seconds, can surface trends over time, trends over time and area, customer payment histories, and so on.
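That orders-per-city query is short enough to show. This is a sketch using Python’s built-in sqlite3 with the lecture’s hypothetical tables (names like CustNo and CityNo are my stand-ins for Cust# and City#); a SQL Server version would differ only in minor syntax.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Cities    (CityNo INTEGER PRIMARY KEY, City TEXT);
    CREATE TABLE Customers (CustNo INTEGER PRIMARY KEY, Name TEXT,
                            CityNo INTEGER REFERENCES Cities(CityNo));
    CREATE TABLE Orders    (OrderNo INTEGER PRIMARY KEY, Part INTEGER,
                            CustNo INTEGER REFERENCES Customers(CustNo));

    INSERT INTO Cities    VALUES (1, 'Big City'), (2, 'Small City');
    INSERT INTO Customers VALUES (1, 'John Smith', 1), (2, 'Jane Doe', 2),
                                 (3, 'William Smith', 2);
    INSERT INTO Orders    VALUES (1012, 345, 1), (1453, 651, 2), (1232, 345, 3);
""")

# Customers act as the "binder" tying orders to cities:
rows = conn.execute("""
    SELECT ci.City, o.Part, COUNT(*) AS NumOrders
    FROM Orders o
    JOIN Customers cu ON cu.CustNo = o.CustNo
    JOIN Cities    ci ON ci.CityNo = cu.CityNo
    GROUP BY ci.City, o.Part
    ORDER BY ci.City, o.Part
""").fetchall()

for city, part, n in rows:
    print(city, part, n)
# Big City 345 1
# Small City 345 1
# Small City 651 1
```

Two joins and a GROUP BY replace all of that imagined pivot/CASE mess, and each table carries its facts exactly once.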

I realize this has been a very simplified discussion, but (A) I’m up against having to take care of other things, and so must finish this lecture, and (B) this is a 100 level class, so I am just trying to get the concepts relayed to you, with the hope that you will endeavor to take up the ideas and go further with them yourselves in your homework and research on your own.

Wrap it up

Thanks so much for joining me!

This week has been an incredible experience for me being able to share with you all this information. I thank you so much for joining me! It is my sincere hope that you’ve gotten something to take away from it, if even only a single concept. If you have any questions, be sure to post them in the Comments, below, and I will answer them ASAP.

Check this out for homework

For homework, I am going to once again suggest someone *much* smarter than I to reference on the subject of Indexing. I am giving here a 100 level class; the lady to whom I refer can take you as far as you want to go. I speak here of the Incredible Kalen Delaney, who literally wrote the book, Microsoft® SQL Server® 2008 Internals, the definitive guide to the internals and architecture of the Microsoft SQL Server 2008 relational database engine. She can be found at http://www.sqlserverinternals.com/, and her blog is at http://sqlblog.com/blogs/kalen_delaney/default.aspx. I would simply suggest reading everything there; you won’t be sorry!

Thank you all for joining me this week, I hope I was able to teach you something you can use.


June 30, 2010

Database Design 101b, #SQLU

Posted in Learning, SQL University at 9:00 am by David Taylor

Welcome Back!

For today’s class at SQL University (#SQLU on Twitter, the brainchild of Jorge Segarra, who blogs at The SQL Update Statement, and can also be found championing the Social side of DBAness as @SQLChicken on Twitter), we’ll be delving a little bit deeper into gathering requirements, we’ll determine what our database will look like based on those requirements, discuss normalization of data, and touch on a couple of different ways databases can be built.

Thanks for my involvement (again)

I know it sounds like I’m sucking up (and maybe I am, I’m not telling!) but I would like to once more express my heartfelt thanks to Chancellor Segarra for inviting me to lecture in this esteemed forum. It can’t be said enough what a great opportunity this is, and I am greatly appreciative.

Business Requirements

Why are they important?

There are many ways to build any database, but they all start with one question: what data will it hold? This is determined by the requirements of the business. In many cases a business will live or die on the accuracy and completeness of its data, which should illustrate its importance. I know I sound like a parrot, repeating the words “business requirement” over and over, so let’s take this time to define what I mean.

What are they?

Business requirements simply mean this: what data does the business need in its day to day operations? Taking some examples from my last lecture, if we’re designing for a financial institution, then our database will hold customer names and other identifying information, financial transactions and the like, including information about the institution itself and how all the customer transactions affect the overall business. If we are designing for some sort of retail outlet, we will again probably have customer identifying information, along with orders and payment methods, and we’ll probably also devote some part of the database to keeping track of the business’ stock and supplies.

In the initial stages of design, the database designer should sit down with the business’ management and ask what sorts of data they expect to deal with. Some parts of it, as I mention above, will be obvious and quick to go over, some not so much. In the retail scenario, you may run across details such as the warehouse location system, the different methods of shipping, etc. Asking for further and deeper detail will make the design process go more smoothly, as it is better to have all of the entities to be recorded (defined as pieces of data, such as customer names and addresses, or warehouse locations) identified at the outset.

What managers/customers want vs. what they need

I used this heading in my last lecture, but it bears repeating. Managers and customers have an idea what they want, but they don’t know what they need! A retail manager will tell you that the business needs to know a customer’s address; this is obvious. What may not be so obvious is that customers can, and do, have more than one address. In addition to possibly separate billing and shipping addresses, customers can have more than one address to ship to; they may be shipping gifts to other people, etc. This means that we need to take into account multiple addresses for each customer. This is only one example, but it points out that only a very thorough and detailed conversation about the business’ use of data will yield what their data needs are, and the data professional building the database needs to be savvy enough to understand the underlying data needs.
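One common way to model “many addresses per customer” is a separate child table with one row per address. A minimal sketch, again using Python’s built-in sqlite3 for portability; every name here (Addresses, AddrType, the sample streets) is made up for illustration, not from any real system.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (CustNo INTEGER PRIMARY KEY, Name TEXT);

    -- One row per address, not one address column per customer:
    CREATE TABLE Addresses (
        AddrNo   INTEGER PRIMARY KEY,
        CustNo   INTEGER NOT NULL REFERENCES Customers(CustNo),
        AddrType TEXT NOT NULL,          -- 'billing', 'shipping', 'gift', ...
        Street   TEXT NOT NULL
    );

    INSERT INTO Customers VALUES (1, 'Jane Doe');
    INSERT INTO Addresses VALUES
        (1, 1, 'billing',  '456 Main St'),
        (2, 1, 'shipping', '9 Elm St'),
        (3, 1, 'gift',     '77 Oak Ave');
""")

# All of Jane's addresses come back from one query, however many she has:
rows = conn.execute("""
    SELECT a.AddrType, a.Street
    FROM Addresses a JOIN Customers c ON c.CustNo = a.CustNo
    WHERE c.Name = 'Jane Doe'
    ORDER BY a.AddrNo
""").fetchall()
print(rows)  # [('billing', '456 Main St'), ('shipping', '9 Elm St'), ('gift', '77 Oak Ave')]
```

The design choice here is that adding a fourth address is an INSERT, not a schema change, which is exactly the flexibility the requirements conversation should surface.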

Requirements Gathered, Now What?

The Entity-Relationship Model

That was a mouthful, what is that?!?!? Entities are groups, or clusters, of information, such as the group of information that defines customer identifying information, names, addresses, etc. As we’ll discuss in the normalization section, how different entities relate to one another is the relationship part of that tongue twister, for example, the orders that a customer makes – orders and customers are related to each other so the business can tie them together. Finally, the Model part of the heading refers to how the database is initially conceived – as a model, not necessarily a true database.

Database modeling is the act of defining the entities, filling those entities with attributes, that is, pieces of data that help define the whole, such as a customer’s first and last name, and finally relating those entities to each other. It is at this point in the process that database design tools come into the picture. You’ll remember from the first lecture that these tools run the gamut from a napkin and pencil to very expensive, complicated software that can model the data and then actually build the database for you!

The Chicken and the Egg

I got stuck at this point in writing this lecture, unsure whether to dive into Modeling tools, or Normalization first. The two sort of go hand in hand. The tools build the relationships; the relationships are determined by the level of normalization. It was when I finally wrote that last sentence that I realized we need to discuss normalization first, tools later. So…

Normalization can’t be stressed enough

What is this Normalization?

Normalization is the result of normalizing data. Clear as mud, right? By normalizing data, we remove repetition from within the data. Let’s work through an example. If I need an entity that describes a customer, what should that entity contain? At minimum, it should have the following columns, or fields, of data: First Name, Last Name, Address, City, State, and Zip Code. I’m in the US, so for purposes of discussion I will ignore other countries’ systems of addressing postal boxes or residences. There may be more detail required, but these fields will suffice for discussion.

Now, an entity, or table, with the above fields, looks on the surface to be perfectly valid, but let’s add some data to it:


FirstName  LastName  Address        City        State  ZipCode
John       Smith     123 Any St     Big City    MA     02134
Jane       Doe       456 Main St    Small City  MA     01234
William    Smith     2 Mulberry Ln  Small City  MA     01234

This is, obviously, a lame data set, but it’s enough to illustrate the point. Look at the repeated data within the data set: people with the same last name, people in the same city. While it doesn’t look like much here, how much repetition will there be with thousands of customers? And what if orders are in the same data? Look at this abysmal example of an entity:


FirstName  LastName  Address        City        State  ZipCode  Order#  Part#  Desc
John       Smith     123 Any St     Big City    MA     02134    001012  345    Disc
Jane       Doe       456 Main St    Small City  MA     01234    001453  651    Bowl
William    Smith     2 Mulberry Ln  Small City  MA     01234    001232  345    Disc

Again, lame, but just follow me. What do we do if there is more than one order for a given customer? Add more columns? How do we go about finding those extra orders later? And can we use this type of system to keep up with inventory? I hope you see the answer to these questions.

The answer is normalization, and the way to do it is through what are known as keys. Keys are like codes that define different parts of the data, allowing us to match data in one entity to data in another by way of relationships. The quickest way to explain is by illustration, and I will illustrate the concept by showing you the result of normalizing this set of data into the following related entities:


Cust#  FirstName  LastName  Address
0001   John       Smith     123 Any St
0002   Jane       Doe       456 Main St
0003   William    Smith     2 Mulberry Ln


Cust#  City#
0001   001
0002   002
0003   002


City#  City        State  ZipCode
001    Big City    MA     02134
002    Small City  MA     01234


Cust#  Order#  Part#  Desc
0001   001012  345    Disc
0002   001453  651    Bowl
0003   001232  345    Disc

It looks like more data than we started with, and it is, for this small data set. It can also be normalized further, as I will discuss in a moment. But you can already see the advantage of this system as the data set grows and the data gets further normalized. There will be fewer repeated rows of information, as illustrated in the Cities entity, and, once the Orders entity is normalized to separate part numbers and descriptions into their own entities, fewer still.
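A fair worry at this point is that splitting the data up loses the original picture. It doesn’t: a join reassembles the wide rows on demand. Here is a quick sketch using Python’s built-in sqlite3 with the normalized entities above (column names like CustNo and CityNo are my stand-ins for Cust# and City#, and the schema is the lecture’s hypothetical one, not a real system).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Cities    (CityNo INTEGER PRIMARY KEY, City TEXT, State TEXT, Zip TEXT);
    CREATE TABLE Customers (CustNo INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT,
                            Address TEXT, CityNo INTEGER REFERENCES Cities(CityNo));
    CREATE TABLE Orders    (OrderNo INTEGER PRIMARY KEY, Part INTEGER,
                            CustNo INTEGER REFERENCES Customers(CustNo));

    INSERT INTO Cities    VALUES (1, 'Big City', 'MA', '02134'),
                                 (2, 'Small City', 'MA', '01234');
    INSERT INTO Customers VALUES (1, 'John', 'Smith', '123 Any St', 1),
                                 (2, 'Jane', 'Doe', '456 Main St', 2),
                                 (3, 'William', 'Smith', '2 Mulberry Ln', 2);
    INSERT INTO Orders    VALUES (1012, 345, 1), (1453, 651, 2), (1232, 345, 3);
""")

# A join rebuilds the original wide rows whenever they are wanted:
flat = conn.execute("""
    SELECT cu.FirstName, cu.LastName, cu.Address, ci.City, ci.State, ci.Zip,
           o.OrderNo, o.Part
    FROM Customers cu
    JOIN Cities ci ON ci.CityNo = cu.CityNo
    JOIN Orders o  ON o.CustNo  = cu.CustNo
    ORDER BY cu.CustNo
""").fetchall()

print(flat[0])  # ('John', 'Smith', '123 Any St', 'Big City', 'MA', '02134', 1012, 345)
```

Each city’s name, state, and zip now live in exactly one row, yet the denormalized view is one query away.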

The Normal Forms

I will just touch on this subject and allow the student to follow up; in fact, one of my fellow professors will be in this week with a more detailed article on normalization, and I am honored to be partnered with this particular professor this week. I will leave his identity a surprise. For now, though, realize that there are what are known as different Normal Forms, called First Normal Form (1NF), Second Normal Form (2NF), and so on, right on up to Sixth Normal Form (6NF).

E. F. Codd, the inventor of the relational model, came up with the set of rules that define how data should be organized within a database; these are the Normal Forms. Essentially, each successive Form reduces repetition further. The normalization illustrated above is really only 1NF; breaking out the cities into their own table with their own keys starts into 2NF. As you go up through the Forms, at the top you are left with only two columns per entity, the relational key and the data, such as a City# (key) and City (data). While it looks strange at first glance, the more Normal your database, the easier it is to query.

Data Modeling Tools

Now that we have some understanding of Normalization, what tools are available to us to build a Normalized Database?

The Napkin and Pencil

I love the fact that this set of tools is valid for building databases – it emphasizes that databases are, at their base, intuitive things. What we’ve done above is really a napkin and pencil database design, even though it was built in a word processor. It was just a quick jotting of what data is needed, and then a breaking down, or normalization, of that data. The same can be done with pencil and paper. A bit more time consuming than some other ways, but still valid.

The Big Guns – Professional Data Modeling Software

There are quite a few commercial and free data modeling software packages available to the designer. These can take the input of data needs and visually design the database on screen, ending in actually creating the Tables, Primary and Foreign keys and other objects in a ready to use database. Prices range from nothing all the way up to prices so high websites won’t divulge them – you need to contact the company’s “Sales Engineer” to find out how much. I say, if you need to ask how much, you probably can’t afford it!

An Option, and a Caution

Within SQL Server is a Database Diagram tool, within which you can lay out tables and keys and relate them to each other. This is an option for the intrepid designer, one Microsoft obviously endorses, as they include it in the package. There is one caution you must remember if you use it, though: it applies your changes to the actual database when you save the diagram! This means that SQL Server is changing and rearranging your database as you work, and if you are doing this on a production system, your overall server performance will suffer. Use it if you will, but be warned of the consequences.

Wrap it up

What we’ll cover next time

For our next class, we will discuss creating the physical database (so far we have only really talked about building a model of the database), talk about keys a bit more in depth, along with their less-thought-of brothers, indexes, and go over some methods of getting information from data, to illustrate why the design is important. That last is something I said I would do today, but I realized I needed to cover a few more subjects before we got there.

Check this out for homework

For homework, I am going to go out on a limb and suggest someone *much* smarter than I to reference on the subject of Database Design. I am giving here a 100 level class; the gentleman to whom I refer can take you as far as you want to go. I speak here of the Great Buck Woody, who is a SQL Server MVP and is currently a SQL Server Technical Specialist at Microsoft, and point you to his starting article entitled Database Design: Requirements, Entities, and Attributes. There are links at the top and bottom of the article to go Next and Back; I would suggest you use them.

Until next class, then, enjoy yourselves.

June 28, 2010

Database Design 101a, #SQLU

Posted in Learning, SQL University at 9:00 am by David Taylor

What is #SQLU

SQL University, known as #SQLU on Twitter, is the brainchild of Jorge Segarra, who blogs at The SQL Update Statement, and can also be found championing the Social side of DBAness as @SQLChicken on Twitter. The community owes Jorge a deep debt of gratitude for making #SQLU possible. The gathering and dissemination of knowledge is one of the highest achievements to which a community can aspire, and it takes great leadership to do such, which is why, I am sure, Jorge has been nominated for the prestigious Exceptional DBA Award.

Thanks for allowing my involvement

I would like to take this opportunity to express my heartfelt thanks to Chancellor Segarra for inviting me to lecture in this esteemed forum. It is said that everybody with some knowledge is capable of sharing that knowledge; I hope I can do this University the justice it deserves.

Welcome back to Monday, I hope you enjoyed the weekend

I know Professor McCown left you Friday with a lot to think about regarding women in technology. I hope you found time in your busy lives to take the initiative with any women you know who are teetering on the edge of this field and encourage them into it, and that you will always keep that in mind.

This has been a busy semester for you all, and I know there has been a lot covered. What I’d like you to focus on this week might be construed as a bit more basic than some of the technical lessons you have received, but I feel that getting the fundamentals right will do worlds of good when those higher steps are taken.

To that end, this week’s class at SQL University, we are going to discuss database design.

Plan for the week

My plan for this week is to start with a high level overview of database design, then discuss further aspects of design, such as why it is important, who will be using it, how to implement it, what tools are available to help the designer, and what the consequences of good, bad, or even no design could be.

There will be no test at the end; your own work will show you how well you have absorbed the lessons. There will be homework every day of class, and keep in mind that Coach Larock will probably also have work for you to do.

Database design is a complex subject, but, as I have mentioned, getting it right is fundamental to your database being usable, having good performance, being scalable, and, above all, being pertinent to the customer.

The 50,000 foot view of DB Design

Why do we care

We data professionals need to care about the design of the databases under our care for multiple reasons. First and foremost, we make our livings from these entities, and we must justify our earnings. Also, the majority of us are charged with the maintenance of the databases we work with, and good design facilitates easier maintenance. Finally for this discussion, we all hope the businesses that entrust their data to us will grow, and the databases underlying that business will very likely grow along with it. A well designed database can handle multi-fold growth in stride – correct steps taken at the outset will allow later growth to be unimpeded.

What managers/customers want vs. what they need

It is a truism that managers and customers have an idea what they want, but they don’t know what they need until what they want has been in implementation for some period and they can see how things work. The database designer needs to know at the outset the end user’s business needs, rather than what the customer thinks she wants, because it’s what is needed that matters. We will spend a fair amount of time discussing how to determine those needs when we get into Business Requirements.

What tools can help us out

There are several tools available, both free and not, for assisting with database design. Some are better than others, and, as in any aspect of software design, there are de facto standard tools out there. These tools can range all the way from the back of a napkin and a pencil to a full blown ERM (Entity Relationship Model) design software tool. We will briefly touch on several in this range as time allows throughout the week.

Why be Normal?

Normal in this case does not refer to where one stands in society! Rather, I am talking here about data normalization. This is a vast subject unto itself, with many great minds having chimed in on it over the years. We will be discussing the basics of normalization, the different normal forms, and when one would choose each form as the database gets designed. Normalization is data simplification, an elimination of repetition of data within the data store.

Does it fly?

Again, I am alluding to a concept here. Databases can scale to incredible heights. Transactions per second can be in the thousands, as can the number of concurrent users. Good design will allow the database engine to handle such large numbers of transactions while not so good designs can cause such conditions to bog down servers irrevocably. This is probably one of the most crucial reasons for learning good design.

Why worry about Database Design

They’re just Excel tables inside SQL Server, aren’t they?

As mentioned, most managers and business data owners think they know what they want. Just about anyone in the head office can open a copy of Excel and start typing in a collection of information and generate useful reports from that information, all within Excel. This causes those same people to think that what is deep inside SQL Server is of the same nature, that the data professional is simply building spreadsheets for users to store data from their various forms.

If you’ve not yet been exposed to good database design, you may think the same thing. I will make a confession here: when I started out in SQL Server, I did the same. Each group of pieces of information was contained in a single table, and I thought the power of SQL Server was only that it had great facility to slice and dice the data. What I found was that it was very difficult to write queries against such tables to obtain the necessary consolidation of data into usable information, and further, that such arcane and convoluted queries suffered terrible performance.


Performance

Performance gets its own heading for a reason. SQL Server is fast. Unbelievably, almost magically, fast, if what you are doing with it is designed correctly. To give a concrete example, one of my own business tasks was to generate a report of yields from a manufacturing process. In the beginning, with my ‘Excel tables’ and convoluted queries, it took almost two minutes for the report to be rendered each time it was opened. The management team relying on the report was not happy, to say the least. No one wants to stare at a spinning indicator for two minutes, waiting for a daily report to appear.

I learned, then employed, the concepts I will be teaching this week, and found that a properly designed database yielded simple querying techniques which reduced those two minutes to sub-second performance. Because of that (and other things), I remain employed.

Business Requirements can’t be stressed enough

Performance, normalization, querying techniques – none of these matter if requirements are not met. Remember that the business’ data in many cases *IS* the business, and that without it the business would at best suffer, and at worst, cease to exist. The data within the database needs to be understood by the data professional, in enough detail that proper tables can be constructed to house the data efficiently, and that querying devices such as views, stored procedures, etc. can be easily built to gain real information from the raw data to assist the business owners and employees in the success of the business.

Talking to customers to get the requirements

The only real way the data professional can understand the business and its data sufficiently is to simply talk with the customers. The customer, in this case, means any consumer of the data, whether someone who hired you as a consultant or someone higher up in the chain of command within the business. Meeting with the producers and users of the data to gain insight into what the data is and how it will be used is often the best, and sometimes the only, way of gaining enough understanding to properly design the database.

Understanding the business

Understanding the business is the next level of detail required. Along with tables, queries and the like, the database designer needs to also consider such concepts as security access and data usage to answer such questions as who can see and interact with the data, and what parts of the data will be used for which parts of the business? Again, these are parts of the Business Requirements which can be gleaned and understood from meeting with and talking to the customers and managers and users of the data.

Understanding the data

A corollary to understanding the business is to simply understand the data itself. Is it a collection of financial data, as in a bank or mortgage loan business? Is it orders and customers, as would be found in a retail business? These types of data are vastly different both in their collection and their usage, and understanding, at least in general, if not in detail, the data that will make up the content of the database to be designed will end in a vastly better database than one which was designed without that knowledge.

Wrap it up

What we’ll cover next time

For our next class, we will delve into actually building a (simple) database, based on requirements that I will describe. We’ll build it two ways, one bad and one better, so you can learn to discern the difference. We’ll cover normalization in a bit more depth, and I will describe some different ways of building the database, utilizing a couple of the different tools available. We’ll discuss also some methods of getting information from data, to illustrate why the design is important.

Check this out for homework

In the meantime, it will be helpful for you to learn what not to do, before we even start to do anything. To that end, I would like you to read a piece by Dr. SQL himself, Louis Davidson, author of SQL Server 2005 Database Design and Optimization. The specific blog post I would like you to read is entitled Ten Common Database Design Mistakes. There is a wealth of information to be had here, and I would like you to have this information in mind as we discuss the proper design of a database.

Until next class, then, enjoy yourselves.

June 25, 2010

SQL Community; PASS Call for Nominations; Exceptional DBA;

Posted in SQL Community at 10:00 am by David Taylor

I am a person of many hats. I am, first and foremost, a father; I am a Quality System Analyst at work; I am a woodworker by hobby; I am a SQL Server DBA / Developer. I am also a member of the SQL Community, and I have found that community a terrific group to be a part of. I have made many friends through the community; I have gotten the opportunity to present at a SQL Saturday; I have gotten help through the #sqlhelp tag on Twitter; and, if I play my cards right (“trust to luck, but be sure to cast out my nets” – points if you get the reference!), I will soon have a new career simply because I am part of this community. All of these things are of immense benefit, freely given by everyone in the community, and all this and more are available to any who find themselves a part of it.

But what I feel is the largest and best benefit of being part of this great group of people is the many and varied ways we can all give back to the community. In my last blog post, I mentioned one way I am giving back; I get to be part of the faculty of SQL University. That’s a great honor. Another, ongoing way I give back, and I am not sure how many know this, but I am a Professional Association for SQL Server (PASS) Virtual Chapter Volunteer. I am the Volunteer Coordinator for the Application Development VC. In this capacity, I moderate the LiveMeetings twice a month, and in between work with the team of volunteers that really make the VC great, Speaker Coordinator John Jakubowski (Blog | Twitter), Marketing Coordinator Aaron Nelson (Blog | Twitter),  and Website Volunteer Eric Humphrey (Blog | Twitter).

My blog post today is from the standpoint of a PASS Volunteer. I think if you are part of the SQL Community, you should also be a member of PASS. If you’re not, go to sqlpass.org right now and sign up – go on, I’ll wait… It’s free; they have terrific learning resources, and they are associated with the best in the business. If you’re a member of PASS (and I’m sure that if you weren’t a few minutes ago, you are now, right?), then you should take an interest in the governance of PASS. You can read the Board minutes, see what the Board is planning, and find all kinds of great info on their Governance page (login required – I *told* you to sign up, didn’t I?). Further, you should take an interest in who is on that Board.

That is the subject of the first part of my post today. I received an email today that started “The 2010 PASS elections process kicked off this week with the Call for Nominations,” asking the recipients to help get the word out through emailing, blogging, Tweeting (hashtag #passvotes), etc. Now, I am seriously as apolitical as they come, so you know I’m not writing this to toe any party line or whatever other clichés armchair politicos spout from their La-Z-Boys during the evening news. PASS is something I believe in, wholeheartedly. I can honestly say that my life has gotten better and easier since I have been a member of both PASS and the SQL Community. I feel good asking you to go to http://elections.sqlpass.org to find out more and see what you can do.

Let me paste a couple of paragraphs from that email; this is interesting stuff…

If you’re considering running for the Board, we strongly urge you to apply. The more applications we receive, the better the elections process will run. More importantly, though, PASS would like to see its most talented members in leadership positions. If you think you or someone you know is an ideal candidate, please review what it takes to be a Director, download the application and apply before the July 21 deadline. Click here for other important dates.

As a PASS Director, you would be responsible for the day-to-day activities of the organization, as well as setting the course for the organization on short- and long-term decisions. The primary directive of the Board is to focus on the key strategic issues of the SQL Server community by providing members with opportunities to advance their technical and professional skills, network at the local and international levels, and interact with the industry’s most accomplished users and experts.

You can read more and stay up to date on the elections process at http://elections.sqlpass.org. Please feel free to keep in touch with the Elections team at HQ (consisting of Johannes Bezuidenhout, Governance and Web Content Coordinator and Nancy Nasso, PASS Community Coordinator). You can also voice your opinions via Twitter (#passvotes) or visit the elections discussion forums at http://elections.sqlpass.org (to be launched in early July).

Thanks in advance for staying involved. PASS couldn’t run without your dedication and support!

The second part of my blog post today I would like to spend promoting some of the folks I have met since joining the community, and touting the great job they have been doing. Yes, I am talking about the Exceptional DBA Award finalists that were announced yesterday. Go on over to http://www.exceptionaldba.com/ and check out the finalists. I can honestly say I have had the honor of actually meeting two of them, and have been blessed and honored to be called friend by at least one of them. All six are terrific in their own right. Check out the site, read their bios, then go and read their blogs; you’ll see all of them deserve to be called Exceptional DBAs. And when you’re done reading, VOTE!

This is such a big deal that it was picked up by big news organizations! Check out what CNBC has to say about it on their site… http://www.cnbc.com/id/37868062.

And with that I am going to go put on one of my other hats, and get back to work. Thanks for reading, and see you next week in class at #SQLU!

June 3, 2010

I’m a #SQLU Faculty Member!

Posted in Learning, SQL Server, SQL University at 10:00 am by David Taylor

I read a blog today, about blogging, and one of the takeaways I got was that spending words apologizing for not blogging (or not blogging often enough) is a waste of your time and mine, so I will stop wasting your time.

This blog’s about learning, and it’s about SQL, and about learning SQL. They say the best way to learn is by teaching (see Buck Woody’s post about this). In that vein, I have taken on a teaching position. Maybe a small one, in the grand scheme of things, but you gotta start somewhere.

I will be taking on a week of lessons as a faculty member of good old #SQLU, SQL University, that awesomeness created by Jorge Segarra, better known in the Twitter world as @sqlchicken. My week will be all about Database Design, and, if you saw the databases I manage, you would cry, but it’s been a great learning opportunity, so I feel qualified.

So if you haven’t been following #SQLU, get over there and get your learn on! This semester has had some of the greats of the SQL World, with subjects so far covering Documentation, Professional Development, Security and Auditing, Tools of the Trade, SSIS Design and Development, PowerShell, Parallelism and Testing and Refactoring. Some heady subjects, lots to learn. And don’t forget to also get over to Thomas Larock’s (@SQLRockstar on Twitter) blog for the weekly Basketball practices, where he teaches you to use the knowledge you are gaining.

I look forward to serving you, the students of #SQLU, and working with the Faculty already in place. And maybe I’ll even get me a #SQLU Faculty Member Tee shirt to go along with my #SQLU Alumni shirt!

May 10, 2010

#sqlsat38, from an attendee’s seat

Posted in Learning, SQL Saturday, SQL Server at 10:00 am by David Taylor

I had a thought for a blog post while talking with Jorge Segarra (Blog | Twitter) regarding SQL Saturday, which I will get to at the end of this. I am writing this while actually in the sessions at SQL Saturday 38, and thought I would write the recap as I experience it. The disconnectedness should be brilliant! {Edit before I post this – The tenses change from present to past, and each session description was written throughout the session, so don’t expect cohesiveness in this one! ~David} 

Everybody does the “registration was quick/slow,” “they need more signs!”, what-they-did-right/wrong type of post. Let me tell you that, up until this first session I’m in, all of these things have been going really well. From my perspective, having now attended 5 SQL Saturday events in 7 months, what I am really into is the networking and the learning (see the title of this blog!)

Left to Right: me, @sqlchicken's signature rubber chicken, Troy Gallant (@gratefuldba), Eric Humphrey (@lotsahelp)


During the hour between registration opening and the first session, there was a great crowd of people milling about in the lobby of the building. During this hour, I got to run into at least half a dozen folks I had met before at various events, and then I met a few I never had. One guy I met even told me about an available job, and he had the job description paperwork with him. Now, I know SQL Saturday is a learning adventure, but job leads aren’t a bad side effect!

So, as I write this paragraph, I’m in Jorge’s session of SQL University 101, first session of the day. I know I don’t need the 101 session, but I like Jorge, and he often goes off on higher level tangents. In fact, while I am typing, he’s talking about internals and SAN alignment. There is a large percentage of women in this session, great for WIT (Women in Technology). He’s also covering Twitter, and #sqlhelp, and the other great resources out in the interwebz for SQL Pros. Throughout, the SQL Community was emphasized (pimped) to those folks that don’t know about it. Amazing how many in the industry don’t know about the resources available to them, much like me a year ago!

Jorge Segarra (@sqlchicken) with his famous signature rubber chicken


Next, I stayed in the same room to listen to Brandie Tarvin (website | Twitter) present on Joining and Aliases. Again, a bit of a beginner session, but at this point in the day pretty much all the sessions are, and I spoke to Brandie before the session, and she said she might go more advanced depending on the response, so there. The room was standing room only, with more than a dozen sitting on the floor or standing around the edges. She’s very knowledgeable in the subject, covering everything from the basics through advanced joining of tables, with tips like swapping INNER and OUTER joins to see where data might be missing. Good session overall.

Brandie Tarvin (@WannabeWriter06) answering questions
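That INNER/OUTER swap tip is worth a quick illustration. This is my own minimal sketch, not Brandie’s demo script – the table and column names are made up, and I’m using Python’s built-in sqlite3 just so it runs anywhere:

```python
import sqlite3

# In-memory database with made-up sample data (hypothetical names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (CustNo TEXT PRIMARY KEY, Name TEXT);
    CREATE TABLE Orders (OrderNo TEXT PRIMARY KEY, CustNo TEXT REFERENCES Customers);
    INSERT INTO Customers VALUES ('0001', 'Acme'), ('0002', 'Globex'), ('0003', 'Initech');
    INSERT INTO Orders VALUES ('001012', '0001'), ('01232', '0003');
""")

# INNER JOIN: only customers that have at least one matching order.
inner = conn.execute("""
    SELECT c.CustNo FROM Customers c
    INNER JOIN Orders o ON o.CustNo = c.CustNo
""").fetchall()

# Swap to an OUTER join and the unmatched rows come back as NULLs;
# filtering on them shows exactly which customers have no orders.
missing = conn.execute("""
    SELECT c.CustNo FROM Customers c
    LEFT OUTER JOIN Orders o ON o.CustNo = c.CustNo
    WHERE o.OrderNo IS NULL
""").fetchall()

print(sorted(r[0] for r in inner))   # customers with orders
print([r[0] for r in missing])       # customers with no orders
```

The same swap in T-SQL (INNER JOIN to LEFT OUTER JOIN plus an IS NULL filter) is a quick way to audit a join for rows that silently drop out.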


After this session was lunch – pizza on the piazza 🙂 During lunch were several vendor sessions, and also several mini-sessions. I actually missed most of these because I was hob-nobbing with the twitterati on the piazza (learning AND networking, remember?), but I did jump in on “Introduction to Reporting on a Cube with SSRS 2008” by Dustin Ryan. I sat way in the back because I came in late, and he had no mic, so it was hard to follow, but it looked interesting. MDX is not my strong point, but it was nice to see some in action.

Pizza on the piazza


Next up was “Demystifying SQL Deployments,” a presentation given by Rodney Landrum (Twitter), a SQL MVP. When he started his talk with SOX and HIPAA, I knew I was in the right session. These things really need to be learned. As he went along, he discussed the various staging environments – Dev, QA, Staging to Production – then went on to talk about change management and workflow. I don’t deal with ANY of this where I work, so I was glad to see it covered. He ended with demos of different ways of packaging data to move it, including scripting databases and Red Gate’s SQL Packager. Very cool session.

Rodney Landrum (@rodney_landrum) presenting


Following that was Aaron Nelson’s “Virtualize This!” session. I have seen this on the schedule for the last three SQL Saturdays, but it was always trumped by another presentation. This time, I finally saw it. I sat in “Heckler’s Row” with Jorge Segarra, Eric Humphrey (Blog | Twitter), Andy Warren (Blog | Twitter) and Jose Chinchilla (Blog | Twitter). Configuration looks ‘fairly’ easy, though the box it’s running on needs to have some oomph, for sure. During it, Jorge tweeted “The Demo Gods are not pleased!”, which should give you an idea how things were going at that point. A very interesting session, one that went just over my head (a good thing), so I have stuff to look up until I understand it.

Aaron Nelson (@sqlvariant) presenting


Heckler's Row!


The next to last session of the day was Plamen Ratchev’s (Blog) “Refactoring SQL for Performance.” This is another of those subjects I consider “good to know.” He started by debunking the myth of ‘If it ain’t broke, don’t fix it!’ One type of refactoring was in-place normalization of data (create new tables and move the data over, then create a view referencing the new tables, named the same as the old table), after which the queries can be simplified. Next was using newer features where applicable, such as ROW_NUMBER / OVER ranking and the new MERGE statement. “To improve is to change; to be perfect is to change often” ~Winston Churchill. This was a very detailed session by a very knowledgeable presenter.

Plamen Ratchev Presenting
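The in-place normalization pattern he described can be sketched in a few lines. This is my own toy reconstruction, not Plamen’s script – hypothetical table names, and again using Python’s sqlite3 so it’s runnable. The key idea is that a view carrying the old table’s name keeps existing queries working while the data moves into normalized tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A denormalized "legacy" table: part descriptions repeated on every order row.
conn.executescript("""
    CREATE TABLE Orders (OrderNo TEXT, PartNo TEXT, PartDesc TEXT);
    INSERT INTO Orders VALUES ('001012', '345', 'Disc'),
                              ('01453',  '651', 'Bowl'),
                              ('01232',  '345', 'Disc');
""")

conn.executescript("""
    -- Step 1: create normalized tables and move the data.
    CREATE TABLE Parts (PartNo TEXT PRIMARY KEY, PartDesc TEXT);
    INSERT INTO Parts SELECT DISTINCT PartNo, PartDesc FROM Orders;

    CREATE TABLE OrderRows (OrderNo TEXT, PartNo TEXT REFERENCES Parts);
    INSERT INTO OrderRows SELECT OrderNo, PartNo FROM Orders;

    -- Step 2: drop the old table and put a view with the same name in its
    -- place, so existing queries keep working unchanged.
    DROP TABLE Orders;
    CREATE VIEW Orders AS
        SELECT r.OrderNo, r.PartNo, p.PartDesc
        FROM OrderRows r JOIN Parts p ON p.PartNo = r.PartNo;
""")

# An old query against "Orders" still returns the same rows as before.
rows = conn.execute(
    "SELECT OrderNo, PartDesc FROM Orders ORDER BY OrderNo"
).fetchall()
print(rows)
```

Once the view is in place, callers can be migrated to the normalized tables at leisure, and the simplified queries come from hitting OrderRows and Parts directly.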


The final session of the day for me was the SSIS Iron Chef competition. 

Iron Chef SSIS


This was done in Tampa, at SQL Saturday #32, but I mostly missed it there. When I walked into the room, epic music was playing on the sound system, Brian Knight (Blog | Twitter) and Adam Jorgenson (Blog | Twitter) were preparing their tools, and anticipation was in the air! Once it started, these people put on a show! A mix of Iron Chef America and a comedy troupe show. Devin Knight (Blog | Twitter), the challenger, chose from among four “chefs” (including Troy Gallant in the role of Donald Dot Farmer!) to compete with his own blood, Brian Knight. The jokes about each other’s SSIS packages (swollen, red packages, etc.) were rampant. Brian had his ETL written in about 20 minutes, Devin about five minutes later. Brian built his data warehouse, and Devin, falling behind, jumped right to the new PowerPivot, using that as a data warehouse. And he finished it in a matter of seconds, with charts, while Brian was still building reports. Brian then broke out the maps, the spatial data visualization aids available in R2. With one minute to go, charts were popping up on both screens, and the crowd counted down the last five seconds. Brian and Devin explained how they did what they did, and the judges deliberated. They gave their comments, and chose as the Iron Chef Jacksonville one of the chefs that wasn’t chosen, Mike someone or other. I didn’t get it, but hey, the rest was fun.

The Iron Chefs line up


Once the competition was over, the attendees that had been in the other sessions were brought into this room for the closing ceremonies. Scott Gleason recognized speakers, volunteers and attendees, with a special shout out to all the women who showed up, pointing out the Women in Technology section of PASS at wit.sqlpass.org. Then the loot was given out. There was a Woz signature iPod, a Flip video camera, a $100 gift certificate to Best Buy, books, shirts, licenses for software – great stuff. Once it was all awarded, they announced the after-party at a place called Seven Bridges, with free billiards and appetizers, but by this time I felt I had to bow out. I had a five-hour trip in front of me, and had been up since early in the morning, so I just went to see the beach, then got on the road home.

Happy attendees leaving the event


Oh, and the thought I had last night that actually prompted this post is this: the SQL Saturday website is a terrific learning resource! One negative thing about every SQL Saturday is that you can’t attend every session. And, for most people, one can’t attend every SQL Saturday. But at the SQL Saturday website (if presenters are diligent), there are the session slide decks, and often scripts that illustrate those decks. If you miss a session, you can usually get the gist of it from the download, and there are often resources in those decks that point to further information. As a knowledge repository, the SQL Saturday website is hard to beat!

One final thought, added just before scheduling this post – Jacksonville is a really nice city. I’d never been, but I took a little time before getting on the road to see a bit of it, and I really like it. Except for the heat, it reminds me a lot of home (I’m from Boston and environs). I’d like to go back sometime, if I get the opportunity. 

Jacksonville Beach, the afternoon of #sqlsat38

May 5, 2010

Speaking for #sqlpass

Posted in SQL Server at 10:00 am by David Taylor

I am a fortunate soul. I have seen Aaron Nelson’s (Blog | Twitter) map of all PASS chapters in the US, and there is one near me. Fifty miles away, but check the map – that’s nearby, relatively. Not only that, but the chapter’s former president, Ken Simmons (Blog | Twitter), moved away last fall, making its future an unknown, until Jon Boulineau (Twitter) stepped in to fill the gap, and the chapter had its ‘re-inaugural’ meeting last Tuesday, April 28, 2010. Finally, fortune smiled on me because when Ken and Jon were in discussions about handing over the reins over the winter, they sent me an email asking me to present for the group at that meeting.

This was a big deal for me. Ken is huge in the SQL Server world, an author, an MVP, a holder of five SQL certifications. Jon is the new president of the PASS chapter. These guys got together and asked, of all the people they could have asked, me to present at the first meeting after Ken left. Naturally I said yes! I have only presented once before, at SQL Saturday #32 in Tampa, just this past January, but I could not pass up this opportunity to get more involved, to help my SQL Community friends, to Connect, Share and Learn. And I did, in spades.


The evening started simply enough: we met in a meeting room at the main branch of the Columbus library, and Jon had provided some snacks and drinks. There were only a few people there when I arrived, so I shook a few hands, met a couple of new people, and set up my borrowed-from-work laptop on the podium. When we got started promptly at six, there were only about eight of us in the room. Not a great turnout but, as I found out, not bad either, as it had been a few months since the last meeting, and people had stopped paying attention to the website, so the only folks there were those who knew Jon or had heard by word of mouth, friends of friends.

Jon started with a question to those few of us present: what do we want the group to be? We had a lively discussion about the role of SQL Server in our lives, and even discussed other RDBMS products, including NoSQL, in the talks. We brought up many points about how some of the people in the room supported several vendors’ products in their day to day work, not just SQL Server. Of course the ‘SS’ in PASS stands for SQL Server, hence the main reason for meeting as a PASS chapter, but with such diverse experience in the group, it was decided that while the main topic of discussion would revolve around SQL Server, the focus of the group would be the professional development of its members. It’s about the people, baby!


After about a half hour of discussion, it came time for me to present. I did the same presentation I did at SQL Saturday, Trending Reports in SSRS, but this time I had the benefit of attendee feedback from the first time, and I changed things up a bit, spending less time on descriptions of my company and how we do things, and more time on how the reports were built in BIDS. I ran into a couple of glitches when I got to the point of deploying the report that I still don’t understand, but I blame them on the laptop and configuration issues. A laptop is not a server, for sure!

There were some great questions throughout the presentation, and lots of interaction. Some of the folks present had not played in SSRS, and yet were writing reports every day. I got to show off some of SSRS’ features, including ways to automate reports, delivering them by email and to file shares, in various formats such as Excel or PDF files. I am no expert by any means, but I have played with all of these features, and was able to share that knowledge with others who had not.


Presenting, especially to a smart group of people, is a two way street. Through people’s questions, and through discussions of others’ use cases, I learned quite a bit about the different ways people were reporting against their SQL instances. I also learned quite a lot about how businesses outside my own operate, and how SQL Server is used in those businesses.

All in all, it was a very enjoyable experience. It is my sincere hope that more people come out and join us as time goes on, and I think that will be the case as word gets out. We’ve got some great speakers lined up for the coming months, both live and through LiveMeeting, which will give us the opportunity to get speakers we couldn’t otherwise get because of travel.

I’d like to thank Ken and Jon for inviting me to speak, and I can’t wait for next month, to be an attendee!

April 28, 2010

Presenting at Columbus, GA #sqlpass Chapter

Posted in SQL Server at 10:00 am by David Taylor

I am privileged to announce that I’ll be presenting for the second time in my life at the Columbus, Georgia Chapter of PASS on April 28, at 6 PM Eastern. I’ll be presenting the topic I presented at #sqlsat32 in Tampa in January, Using SSRS with Trending Reports, with some changes I made after attendee feedback.

I am totally stoked to see this group get back into meeting after a several month hiatus.

The meeting will be held from 6:00 – 7:30 at the Columbus Public Library, Macon Road.

More info about the group can be found at the Columbus GA SQL Server Users Group website, http://columbusga.sqlpass.org/default.aspx

April 27, 2010

SQL Saturday Recap #sqlsat41

Posted in SQL Saturday, SQL Server at 10:00 am by David Taylor

I got to enjoy another SQL Saturday this past weekend, and it was as good as, and in some ways better than, any I had attended before. This one, in case you didn’t know, was held in the Atlanta suburb of Alpharetta, GA on April 24th. That makes it only 180 miles from home, the closest one I’ve been able to get to this year!

Put on by the folks from AtlantaMDF, the local PASS chapter, led by Stuart Ainsworth (Blog | Twitter), who was helped by the likes of Aaron Nelson (Blog | Twitter) and many others whose names I don’t know (sorry!), this event had speakers that included something like six MVPs and, if I heard correctly, at least two Microsoft employees. Some serious quality in speakers.

Like some of the recaps I have read so far, this will be a bad things/good things type of review:

The Bad

The worst thing for me was that I couldn’t stay for the whole day! I had a prior commitment that forced me to leave at 1:30, making me miss the last three sessions of the day! Some have asked me if it was even worth going, given that restriction, and I answer with a resounding “Yes!” More on why after the not so good.

There was only one registration desk, with two people manning it. There were two lines, splitting the alphabet in two, but the signs with the letter groups on them were written in ballpoint pen, on the desk itself, impossible to read from far away. To be fair, this ended up not affecting me, as Aaron Nelson, who is a fellow volunteer with me at the PASS Application Development Virtual Chapter, came out from behind the table to personally give me my name tag and bag stuffed with papers and such, but for others not with the ‘in’ crowd, I’m sure this was a problem.

Another shortcoming I found was the building layout and the lack of signage within. There were no signs in the parking garage, and even with the help of a stranger who had been in the building before, we had trouble getting to the correct floor to reach the entrance. Once inside, beyond the registration desk, there were elevators around the corner to get upstairs to the event, but no signs telling you to go there. This was not so much of a problem for me because Aaron took me under his wing right away and led me upstairs to where the event was being held. If I had not had that help, I would have had no idea where to go from the desk.

These were really the only negatives I found, and as I say, only the first one was a real problem for me anyway, so the day was a win!

The Good

On the good side, starting again with a personal note, even while I was in line before registration I ran into people I had met at other SQL Saturdays before, so I was immediately comfortable in my surroundings. The first familiar face I saw was Audrey Hammonds (Blog | Twitter), who was also speaking at the event.

Audrey Hammonds and me!

Going further into the good, the food laid out on arrival was excellent. The volunteers were very visible throughout the day, and made the event as smooth as, or smoother than, any I had been to.

I liked how the Speaker Evals were handled – volunteers distributed them at the beginning of each session and collected them at the end, and, another good point, there was a raffle at the end of every session with the winner pulled from the collected evals – I’m sure it made the end raffle much smoother and quicker.

 Now, how I spent my SQL Saturday

 I started with Jonathan Kehayias’ (Blog | Twitter) “Auditing User Activity 101.” Let me start by saying that, as I entered the room, the first thing I heard was that Jonathan could not get his demo computer to work with the projector. ‘Uh, oh,’ I thought, but I needn’t have worried. He soldiered on anyway. His presentation computer worked fine with the projector, so he could present on the subject without problems. He covered auditing in SQL 2000, 2005 and 2008, highlighting what was possible in each version, which I thought was very good. Everyone seems to speak about the latest version, and there are a lot of places still on old versions. There was quite a bit of detailed coverage for each version, too, belying the ‘101’ in the title. While he did cover basics, he also went into deep enough detail to explain the ‘why’ in addition to the ‘how.’ It’s obvious why this man is an MVP – he really knows his stuff!

When it came time for the demos, we just gathered around his laptop! Good thing the SQL community is as close as it is 🙂 The demo scripts were also done on a mix of versions, highlighting the points made during the presentation. All were very well done, and, at the time of writing, I eagerly await their upload so I can study them further.

During the break between the first and second sessions, I had one of the brightest spots of the event, for me – I got to meet, in person, an actual SQL community celebrity, the fairer half of the @MidnightDBA couple, Jen McCown (Blog | Twitter)! I was thrilled! It made the whole trip worth it, even if nothing else had happened that day 🙂

Jen McCown and me!

My second session was Whitney Weaver’s (Blog | Twitter) “Solving Real World Problems with DMVs.” This was a very interesting session. I loved Whitney’s laid back attitude and his in-depth knowledge of the subject. As an aside, Glenn Alan Berry (Blog | Twitter) is doing a blog series this month called “A DMV a Day,” which I have been following closely. This session was similar in that it covered DMVs, but Whitney’s style and take on the subject are different. Having a different approach seemed to help solidify the concepts in my head.

This was followed by Kevin Boles’ (Twitter) “Advanced T-SQL Solutions.” I’m finding myself gravitating towards more advanced subjects as I learn more about SQL Server, and this session was right at the “advanced enough to make my brain feel full” level! This man showed some tricks with SQL that I’d never thought of before, making for another set of scripts I eagerly await being able to download and study.

Lunch, which came next chronologically, was a box affair consisting of choices of sandwich, chips and a cookie, and it was actually very good. Better yet, I got to schmooze and network with more people I have come to know over the past year, among them the lead organizer Stuart Ainsworth, Julie Smith (Blog | Twitter), and several others already named.

The last session I had time to make was Troy Gallant’s (Blog | Twitter) “Introduction to Transactional Replication.” I know, I just got through saying I was learning more advanced stuff, and this was an intro level talk, but I had not been exposed to this subject before. Troy handled it with aplomb, sticking to the 100 level, letting the attendees know where the pitfalls were and where to use the various approaches. A very enjoyable end to the day.

On the way out, I got to spend a little while longer networking and saying goodbye, catching folks I had wanted to see but hadn’t yet that day, such as Robert Cain (Blog | Twitter), who actually remembered me (!), and Kendal Van Dyke (Blog | Twitter), who greeted me like an old friend even though we had only met once before, at SQL Saturday #33 in Charlotte.

A great, fulfilling day overall. I am so glad I was able to go, and I look forward to the next SQL Saturday I can attend: #38, in Jacksonville, in two weeks!

April 22, 2010

SQL Saturday #33 Recap, long after the fact

Posted in SQL Server at 3:42 pm by David Taylor

Now, let’s discuss SQL Saturday #33, held in Charlotte, NC on March 6, 2010.

This would be my third SQL Saturday. I’m becoming a SQL Saturday junkie! This one was special, though, for several reasons. First, it was the SQL Saturday at which the SQL Saturday name, brand, concept, etc. were handed over to the Professional Association for SQL Server (PASS). Second, it was the first SQL Saturday within travel possibility for me that had a plethora of MVPs speaking. Third, it was the first SQL Saturday I attended after becoming Volunteer Coordinator for PASS’ Application Development Virtual Chapter, and Blythe Morrow (Twitter), PASS’ Community Coordinator, for whom I would be volunteering within PASS, was in attendance, which to me was as big a deal as the collection of MVPs!

I live about 400 miles from Charlotte, so when I say the event was within travel distance, I mean only just, as far as driving goes. (Flying to attend a SQL Saturday event is probably not in the cards for me, at least not while I am at my current job.) I took Friday off and drove the seven hours to Charlotte, arriving late enough that I missed meeting up with anyone the night before the event. Nicely, SQL Saturday had hooked up with the Hyatt Place Hotel, which was actually a decent hotel, at a discounted price. I won’t bore you with the details; suffice it to say I had a nice dinner and a good rest before getting over to the Microsoft facilities for the main event the next morning.

Microsoft has a beautiful campus in Charlotte, a spacious parking garage, and we had great weather for the event, which was a good thing because the sessions were spread between two buildings. The event was hosted by the Charlotte SQL Server Users Group, and that group did themselves proud! There was an incredible breakfast laid out, registration was pain free, and there was a general feeling of excitement in the air before the keynote.

The keynote was awesome, which I know is a strange thing to say about a keynote, but this one was special. It was introduced by Peter Shire (Twitter), followed by Steve Jones (Twitter) talking about the origins of SQL Saturday, and then Andy Warren talked more about their goals and where they hoped it would go under PASS. Finally, Rushabh Mehta, PASS’ president, spoke about how PASS’ main, immediate goal was to keep SQL Saturday as good as it has been, and then make it better. At the end of the keynote, Steve and Andy handed Rushabh a huge (about three-foot-long, foam) SQL Saturday key!

Sometime between arriving and the first session, I introduced myself to Blythe Morrow, with whom I got to spend some more time later in the afternoon.

For my first session choice (choosing is the worst part of these things – you can’t be in every session!) I went with “As a DBA, Where do I start?” presented by Mike Walsh (Blog | Twitter). This was a very interactive session; Mike really engaged the group. He focused on what he calls the ‘itties’ of being a DBA – Recoverability, Availability (Health & Performance), Security and Reliability – and he covered them well. I’d been wanting to meet Mike for a long time; he is a fellow New Englander, and one of the great people (of which there are many) on Twitter.

Next up for me was Denny Cherry’s (Blog | Twitter) “Is Virtualization a good choice for SQL Server?” This was an informative session for me, as I have just about zero experience with virtualization, and Denny’s the man to teach it. He actually hooked up to his work environment live from the event to make his points, and he has one hell of a setup! I really learned a lot in this session!

I then went to Aaron Nelson’s (Blog | Twitter) “PowerShell for Database Professionals.” Aaron is on the same team as me in PASS’ Application Development Virtual Chapter, and I had seen him do this presentation virtually, in a LiveMeeting session, the first time he presented it, so I wanted to see it live. He had added demos to the presentation and polished things up nicely – a really good job.

Lunch, as you can imagine in a place with 300 people all eating at once, was a long wait. A nice opportunity for networking, though.

Kendal Van Dyke’s (Blog | Twitter) “The (Solid) State of Drive Technology” was the after-lunch treat. SSDs are fast, really fast. To demonstrate, he actually shut down the laptop he was presenting with, then cold-started it: one minute, twenty seconds to having the presentation on screen again! Impressive as all get out. The information he gave about the different options and specs for SSDs was nearly as impressive.

Denny Cherry’s “Storage for the DBA” was next (can you sense a theme here?), and this one was where my brain started to hurt. Most real DBAs would laugh at the setup I work with over here for SQL Server, so I have zero experience with SAN storage, or even RAID, and this session went *just* over my head. That’s a good thing, though, as it gave me the impetus to go look things up after I got home. I do have to apologize to Denny: I walked out just past the midpoint of his session, though I had a good reason.

People who follow me on Twitter know I am on it on my Blackberry as often as I can be, and this event was no exception. So during Denny’s session, Blythe Morrow and I were tweeting back and forth and she invited me to come join her in the lunchroom where she was hanging out with the likes of Steve Jones, Andy Warren, Aaron Nelson, and Rushabh Mehta, which was quite an honor for me.

My last session of the day was Kevin E Kline’s (Blog | Twitter) “Top 10 Mistakes on SQL Server.” If you haven’t seen this guy present, you need to! His style is awesome, his knowledge phenomenal. Really. What a way to end the day!

But wait, there’s more! At the end, there was the Prize Drawing. Enough has been written about this that I won’t rehash it. There were some awesome prizes, though.

Now, at this point, my brain was mush, and so it must have been time for the After Party! This was held at a place called Mac’s Speed Shop. I was told before I left that I had to try NC BBQ, and I was told while I was there that Mac’s was the place to try it, so that was win-win. Quite a few people from the event were there, and some great conversations and good food capped off the day nicely.

And then… The Long Drive Home. I should mention here that I have a bad habit of tweet-driving, and the road home was no different. I mention it because the first step is admitting you have a problem, and there’s another SQL Saturday this weekend. I wanted to point out before I go that I know I have a problem, but I don’t care 🙂

Thanks for reading!
