If your database has export capabilities, use them. Now!

It should come as no surprise that ETL is a process that should go out the door. If you read my two prior posts, you will see how newer databases that employ export functionality provide far better ways to capture data and send it to data warehouses.

The difference in data quality is stark. With an ETL process for capturing data and loading it into databases, you have to work through several sources, some of which may never have the data you need. Sometimes it feels like you have to be a magician to make data appear. Then there is the export process, which sends all the data you choose, in as raw a form as possible, for the data gurus to play around with and mold into terrific stories.
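To make that concrete, here is a minimal sketch of a raw export, assuming a PostgreSQL source, the psycopg2 client, and a hypothetical transactions table:

```python
# A minimal raw-export sketch. The connection string and the
# "transactions" table are placeholders, not a real schema.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")
with conn, conn.cursor() as cur, open("transactions.csv", "w") as out:
    # COPY ... TO STDOUT streams rows exactly as stored -- no transform
    # step, so the warehouse team gets the data in as raw a form as possible.
    cur.copy_expert(
        "COPY (SELECT * FROM transactions) TO STDOUT WITH CSV HEADER",
        out,
    )
```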

Continue reading…

Extracting, Transforming, Loading data into a warehouse database

The ETL process is a common practice in nearly all companies that have data they want to analyse or repurpose. Often, products cannot be maintained due to resource constraints, the developers working on the product aren’t aware of the goals of those working with warehouse data, or the cost to update the product is too high to replace ETL with a far better process, as I described in my last post.
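For anyone who hasn’t lived through one, here is a bare-bones sketch of an ETL pass, written against sqlite3 so it runs anywhere; the table and column names are hypothetical:

```python
# A bare-bones ETL pass: Extract from the product database, Transform the
# rows, Load them into the warehouse. All names here are made up.
import sqlite3

source = sqlite3.connect("product.db")       # the product's operational DB
warehouse = sqlite3.connect("warehouse.db")  # the analytics DB

warehouse.execute(
    "CREATE TABLE IF NOT EXISTS sales_facts "
    "(order_id INTEGER, total_cents INTEGER, day TEXT)"
)

# Extract: pull rows out of the source system.
rows = source.execute("SELECT id, total, created_at FROM orders").fetchall()

# Transform: normalize currency to cents and truncate ISO timestamps to days.
facts = [(oid, int(round(total * 100)), created[:10])
         for oid, total, created in rows]

# Load: bulk-insert the shaped rows into the warehouse.
warehouse.executemany("INSERT INTO sales_facts VALUES (?, ?, ?)", facts)
warehouse.commit()
```

Every one of those three stages is a place where data can be dropped or distorted, which is exactly the quality problem described above.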

I’m not in any way going to say that ETL is bad, because unfortunately for many, that process is required and there is no other way to get the data. Some of my greatest data challenges came out of ETL processes, early on, before we developed a full platform capable of capturing and sending the data to the warehouse in a straightforward and easy-to-use format.

Continue reading…

Sending transactional data to warehouse databases as it happens

Data warehouses work best when they store data by transaction. Storing individual transactions in a data warehouse, as seen in my last post, allows the data to be used in many different ways and, in many cases, makes it future proof. One of the greater challenges I’ve come across, and I’m sure many others have too, is finding the best and most efficient way of storing transactions.

Whether you are looking to build an end-to-end analytics solution with complete ad hoc capability or an engine that can make contextual recommendations based on activity, the data stored in the warehouse will be very similar. Then comes the question of how best to store that data without slowing requests or adding strain to servers. This can be accomplished through various means, including asynchronously pushing data to your warehouse, appending to a CSV file that is imported into the warehouse database, or, if your database supports it, exporting data when it is most efficient. A sketch of the asynchronous approach follows.
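Here is a minimal sketch of that asynchronous approach, assuming a background thread that drains an in-process queue and batch-inserts into the warehouse; the schema and connection are stand-ins:

```python
# Asynchronous push: the request path only enqueues; a background thread
# drains the queue and batch-inserts. Warehouse schema is a placeholder.
import queue
import sqlite3
import threading

events: queue.Queue = queue.Queue()

def record_transaction(txn: dict) -> None:
    """Called on the hot path; enqueue and return immediately."""
    events.put(txn)

def writer() -> None:
    wh = sqlite3.connect("warehouse.db")
    wh.execute("CREATE TABLE IF NOT EXISTS txns (user_id INTEGER, amount REAL)")
    while True:
        batch = [events.get()]  # block until at least one event arrives
        while not events.empty() and len(batch) < 500:  # then batch up to 500
            batch.append(events.get())
        wh.executemany(
            "INSERT INTO txns VALUES (?, ?)",
            [(t["user_id"], t["amount"]) for t in batch],
        )
        wh.commit()

threading.Thread(target=writer, daemon=True).start()
record_transaction({"user_id": 42, "amount": 9.99})
```

The hot path only enqueues and returns, so a slow warehouse write never holds up a user request.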

Many will perform this process through a more complicated method known as Extract, Transform, and Load, or ETL for short. Many have their reasons to prefer ETL over gathering data from transactions as they happen. In my experience, though, it leads to a lot less flexibility, as the data being extracted for warehouse purposes may be too limited.

Continue reading…

Building a data warehouse

The databases of today are in many cases built for specific purposes. Some of the more common ones we see every day are relational databases, document-oriented databases, operational databases, triplestores, and column-oriented databases (C-Store). Typically, relational, document-oriented, operational, and triplestore databases are used to solve frontend database problems. Then you have databases that are column-oriented or similar, which focus on solving warehousing and backend database problems. These products aren’t required to solve those problems, though they are often best suited for them.
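To make the row-versus-column distinction concrete, here is a toy illustration (the data is made up): an aggregate like SUM touches a single field, and the columnar layout reads only that field.

```python
# Row-oriented: each record stored together (good for frontend lookups).
rows = [
    {"id": 1, "region": "EU", "amount": 10.0},
    {"id": 2, "region": "US", "amount": 25.0},
    {"id": 3, "region": "EU", "amount": 7.5},
]

# Column-oriented: each field stored contiguously (good for warehouse scans).
columns = {
    "id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [10.0, 25.0, 7.5],
}

# Row store: SUM(amount) must walk every full record.
print(sum(r["amount"] for r in rows))

# Column store: SUM(amount) reads a single contiguous array.
print(sum(columns["amount"]))
```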

Continue reading…

The challenges of scaling your data vertically

There are many reasons for which databases must be scaled. The majority of the time, scaling is done to overcome performance issues as the product grows. Though NoSQL is making a lot of noise these days, it is to no one’s surprise that SQL is still extremely popular. In general, the same principles are followed when scaling out any SQL product, be it MySQL, MSSQL, Oracle, or even DB2. With big data, however, scaling is often done to balance the data across multiple hardware nodes or clusters.
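As a rough sketch of what balancing data across nodes can look like, here is simple hash-based sharding; the node names are placeholders, and real systems layer replication and rebalancing on top:

```python
# Hash sharding: a stable hash of the key picks the node, so the same key
# always lands on the same node. Node addresses are placeholders.
import zlib

NODES = ["db-node-0", "db-node-1", "db-node-2"]

def node_for(key: str) -> str:
    # crc32 is stable across runs, unlike Python's randomized built-in hash().
    return NODES[zlib.crc32(key.encode()) % len(NODES)]

print(node_for("user:1001"))  # all reads/writes for this user hit one node
print(node_for("user:1002"))
```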

Continue reading…

Starting a new series. The BIG data series!

Over the next few weeks I will be starting a new series, which I hope will run to at least a dozen posts. I am looking to cover a lot of the high-level concepts around big data. Scaling, data formats, software, and using the data will be amongst the topics I cover.

It is no secret that I am looking for a job. Over the past few months I’ve had many interviews and have noticed that all of these companies had similar problems: BIG DATA. Most of them had issues scaling their databases for either performance or storage reasons. Many others were simply unable to pull and use their data efficiently, because it would take seconds to get results from a fairly simple query.

I’ll be looking to keep the posts bite-sized, like this one. I look forward to getting feedback or suggestions on topics related to Big Data that I should cover.

Stay tuned; the first post, coming shortly, will be on scaling.