Database Expert

Yes or No?

Within intelligence and justice organizations there is an ongoing discussion about whether I am an ‘expert on databases’ or not. This is a contested question and is directly related to my database findings at ZorgNed, for these indisputably indicate the presence of a Satanic school of thought at ZorgNed. These findings are even separate from the identification of Mariël and Gerda as Satanist cabbalists (Jewish Kabbalah). Those identifications stand on their own and are highlighted in the article describing my findings at ZorgNed. Those who attempt to undermine my expertise can be identified as sympathizers of, or mental members of, this proven cabal.

The objection comes from gatekeepers and cabbalists who vehemently claim that I am no database expert and that I am instead a mere ‘schizophrenic with an image’, so that by these means they can nullify the notion that a Satanic school of thought is present. They base this objection on a statement made by Joord de Vreede, an employee at C.E. Repair, where I worked in 2016. He was my immediate superior.

The statement goes that I would not be a database expert ‘because I used primary keys with GUIDs’, and one doesn’t do that if one wants to set up a database schema for performance. This they try to maintain as a benchmark with which to undermine my expertise. I will now describe the reality. First, I will explain a bit about primary keys. Then I present a factual list showing my expertise. This is followed by an explanation of the ‘Taken from GitHub!’ allegation with regard to the frameworks I wrote myself, and finally I present a short profile of this Joord and his conduct with regard to databases.

Primary Keys 

Data is often added to database tables in the form of a row. Take, for instance, a table of customers: each customer is represented by a row in this table. A table row is called a ‘record’ in jargon. A database often records multiple types of data; in this example, that could include a table of orders for each customer.

The point is that software needs to fetch related rows from database tables in order to work with this set of data, whether to display it or to let the user modify it and write it back. Records in tables must therefore be uniquely addressable, for customers can have identical last names.

For this, a long time ago, a principle was thought up: the ‘primary key’. This is a data field added to a table through which rows can always be uniquely identified. Most of the time this field holds a technical value, generated by the database or the software, that is unique for each row within the table. However, this isn’t always the case. Think of a table of countries: the country code is short and is a unique value beforehand, so it can serve as the primary key as well.
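
As an illustration, a minimal sketch in generic SQL of such a natural key; the table and column names are hypothetical:

    CREATE TABLE Countries (
      CountryCode CHAR(2)      NOT NULL PRIMARY KEY,  -- natural key: 'NL', 'DE', ...
      CountryName VARCHAR(100) NOT NULL
    );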

To get back to the example: both the customers table and the orders table will have a field whose value is unique across rows; the primary key value. To specify which order belongs to which customer, customer data isn’t duplicated and tracked within an order row. Instead, a reference to the customer row in the customers table is recorded: an order row gets a field holding the primary key value of the customer the order belongs to. This referencing field is a referencing key, or in jargon, a ‘foreign key’. This way, customer data needs to be recorded only once, in the customers table.
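
To make the customers/orders example concrete, here is a minimal sketch in generic SQL; the table and column names are hypothetical:

    CREATE TABLE Customers (
      CustomerId INT          NOT NULL PRIMARY KEY,  -- primary key: uniquely identifies the row
      LastName   VARCHAR(100) NOT NULL               -- not unique: customers can share last names
    );

    CREATE TABLE Orders (
      OrderId    INT  NOT NULL PRIMARY KEY,
      CustomerId INT  NOT NULL,                      -- foreign key: points to the owning customer
      OrderDate  DATE NOT NULL,
      FOREIGN KEY (CustomerId) REFERENCES Customers (CustomerId)
    );

Fetching all orders for one customer then simply filters on this referencing key, e.g. SELECT * FROM Orders WHERE CustomerId = 7.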

Now comes the gist of Joord’s statement. Primary keys can have differing formats. A key can be a sequential number, but it can also be a complex yet unique value like the GUID, which stands for ‘Globally Unique IDentifier’: a value that is nearly guaranteed to be unique regardless of which computer system in the world generated it. Or it can be a readable series of characters, as long as the values are unique, as is the case with country codes. One can thus specify per table which data type the primary key is to have.

Integers

An integer is a whole number in computing terms. Integers take up relatively little space and, when one uses positive numbers only, a 32-bit integer has a range of a little over 2 billion unique values. Primary keys of the integer data type will often begin counting at 1; the next row then gets number 2, and so on. One can choose to have the database generate the next sequence number or to have the application do it; both have advantages and disadvantages. The main reason to use integer values as the primary key, however, is performance. Because it is an increasing value, and as such is sorted beforehand, the database can add a row or retrieve a row for a given key value very quickly. Integers are preferred for tables which may contain a huge number of rows while having a low ‘turnover’ ratio, meaning that relatively few rows are deleted again and the data persists for the longer term. A huge expected number of rows already suggests that performance is going to be an important reason for choosing integer keys. Integer is almost always the default data type for primary keys of tables.
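
A minimal sketch of such a sequential integer key, here in SQL Server syntax (IDENTITY); other databases use sequences or AUTO_INCREMENT for the same effect:

    CREATE TABLE Customers (
      CustomerId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,  -- database generates 1, 2, 3, ...
      LastName   VARCHAR(100)      NOT NULL
    );

    -- The application does not supply the key; the database assigns the next number.
    INSERT INTO Customers (LastName) VALUES ('Jansen');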

GUIDs

GUIDs, as explained before, can also be generated by the database or by the application. These values are guaranteed to be unique not only within a table, but also within the database, and outside of the database, no matter where in the world. This is exactly the key property of this data type: guaranteed uniqueness anywhere in the world. The range of unique values is astronomically large: a GUID is a 128-bit value, good for about 3.4 × 10^38 possibilities.

This can be interesting when table rows are exported and imported a lot between databases or data files: the rows always remain unique and their primary keys never have to undergo a translation. A GUID is also useful for tables with a sizable ‘turnover’ ratio, because uniqueness is guaranteed and the issue of ‘holes’ left in sequences by deletion simply does not arise.
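
A minimal sketch of a GUID primary key, again in SQL Server terms (UNIQUEIDENTIFIER with NEWID()); other databases offer equivalents such as a uuid type:

    CREATE TABLE Documents (
      DocumentId UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,  -- globally unique value
      Title      VARCHAR(200)     NOT NULL
    );

    -- Rows keep their identity when exported to and imported into another database,
    -- so no key translation is ever needed.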

A renowned product that uses GUIDs as primary keys for tracking rows in tables is Sitefinity by Telerik. This is a CMS, a content management system. CMS systems can have a huge turnover ratio, as old content revisions get deleted while being replaced by new ones.

The Argument

When I had worked at C.E. Repair for just a short time, Joord asked me to create two small tables: a table for students and a table for the grades of these students. The goal was to use this as a test for converting relational data to the XML format. He wanted to use the conversion logic for the actual project that was being worked on.

For these two tables I used GUID primary keys. In this scenario, performance is no issue; it was about conversion to XML from a small dataset. I used the type for convenience, because the generated keys would be known beforehand and thus could be used directly with related rows to specify the relationship between the grades and students (see the sketch below). Joord in no way, shape or form suggested that this scenario had to be set up for performance. These two tables were not part of the actual project anyway.
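
A minimal sketch of the convenience at play, with hypothetical table and column names; because the GUID keys are chosen beforehand, the grade rows can reference the student rows directly in the same script:

    CREATE TABLE Students (
      StudentId UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
      Name      VARCHAR(100)     NOT NULL
    );

    CREATE TABLE Grades (
      GradeId   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
      StudentId UNIQUEIDENTIFIER NOT NULL REFERENCES Students (StudentId),
      Grade     DECIMAL(3,1)     NOT NULL
    );

    -- The student key is known up front (here as a literal), so the relationship
    -- can be specified without first querying for a generated value.
    INSERT INTO Students (StudentId, Name)
      VALUES ('11111111-1111-1111-1111-111111111111', 'Alice');
    INSERT INTO Grades (GradeId, StudentId, Grade)
      VALUES (NEWID(), '11111111-1111-1111-1111-111111111111', 8.5);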

Now this is used to claim I would not be an expert on databases because of this choice in this scenario. Beyond that they have no substantiation at all. Had I been asked to create tables for performance or for large datasets, I would have used 32-bit integer primary keys without a second thought.

The Facts

The facts I highlight here are authentic and are historical reality as it transpired.

  • In 1998 I developed a stock management application for Central Service Group for the maintenance of the Fujitsu parts warehouse. I called it StockWizard. The application facilitated everything surrounding the logistics of parts handling between the stock and the repair engineers, as well as leveling and ordering. It was written with Borland Delphi and based on the Paradox database, which was controlled by the Borland Database Engine (BDE). By present standards this is an old-fashioned database, and it had no means to ensure that referencing keys actually existed as rows in the referenced target table; this is something the application had to take care of itself. I gave each table an integer primary key, with performance obviously as the main reason, and the application maintained the referential integrity itself.

  • In 1998 – 1999, for the company Falcon Automatisering, I created components to access the database they used (Sybase SQL Anywhere 5). These components performed much better and were significantly more stable than the components that came with the ‘Visual Objects’ development environment they used. The instability and user-unfriendliness of those out-of-the-box components were the main reasons to create brand-new components from the ground up. They were component wrappers around the ODBC API that implemented a Delphi way of thinking for working with queries and tables.

  • In 1999 – 2003, for the company Kooijman, I created a component which could generate SQL queries based on a visual configuration, for the database they used, named Interbase. I also created the visual query editor for this component, which additionally enabled testing queries against a test database. The related components are named TkvSQLDef, TkvQuery, TkvProvider, TkvDataSource and TkvClientDataSet. Both Erwin Meijers and Leonard Stok can attest to this. I don’t think they would lie about this… right?

  • In 2006, for the company Kraan Bouwcomputing, I had to manually optimize Interbase database queries in order to work with tables of 100,000 to 500,000 article records at once. The database refused to use a specially added index for the table involved and had to be instructed to use this index for the specific query, which reduced query times from minutes to fractions of a second. In jargon: for the SQL query I had to explicitly specify the access plan to make it use the index (see the first sketch after this list).

  • In 2006, at the company Kraan Bouwcomputing, I informed Leonard Stok, my manager there, that if the new software line for the .NET platform wasn’t going to use integer primary keys, I would resign. For the current software the company used composite primary keys (primary keys consisting of multiple fields to obtain uniqueness), which I wanted to get rid of. Mister Stok didn’t want integer keys; he wanted me to set up the database using GUIDs, without the kind of key translation requirement that comes with integer primary keys, which he regarded as too complex for the developers working there. I eventually settled for this.

  • In 2007, for the company Kraan Bouwcomputing, I created a component set for the Microsoft .NET environment in order to access the Interbase database from within .NET. At that time there was no 100% compatible implementation available for this database. In jargon: for Interbase I created a custom ADO.NET implementation. The components are InterbaseConnection, InterbaseCommand, InterbaseConnectionStringBuilder and InterbaseDataReader. This was a success. These components could then be used with the DevExpress product XPO, which enabled the use of Interbase within the DevExpress XAF environment that was used to create the software.

  • In 2008 I made a guild website for my World of Warcraft guild, Serenity, with an integrated scoreboard web application (DKP points). The database behind this application used integer primary keys.

  • From 2014 to 2015 I developed a database access layer for the company DataQuint, based on my own SQL model IP. The head-developer wanted to use DevArt components at first; after examining this package, I deemed the method DevArt uses to involve excessive work. I told them I could build a system that would require no code changes when switching database back-ends, and demonstrated a preliminary version that I had made in my own time. The head-developer was okay with it as long as I would work together with, and inform, the other two developers about it, explaining its workings and so on. Over those two years it became a solid basis for uniform database access to the following databases: Oracle 11g, PostgreSQL 9, SQLite and MS Access. The SQL model I had developed in my own time; the supporting ADO.NET components I wrote during office hours, and additions to the SQL model were done in office time as well. Nobody there provided input on the design of this component set; it is a system that I had already worked out as personal IP. Later, in my own time, I began to build a new private, personal version that would become the basis of my current database and SQL frameworks as described below. The version I made for DataQuint fully replaced some very basic code that the head-developer called his ‘GDP’ – Generic Data Provider. The replaced code was by far not capable of handling database access across all these databases in a dynamic manner. The guy was adamant that the database layer I wrote would still be called ‘GDP’, likely to create the perception that he was responsible for the design and/or the IP. For the entire class set (both the ADO.NET components and the SQL model) I made UML diagrams. The head-developer later loved to show off these diagrams (and others I made with regard to software architecture) to customers and partners, while I was never involved in design sessions with these customers. He actively shielded me from them in order to create the perception that he was responsible for those diagrams and the designs they conveyed, while avoiding scrutiny of his skills by these customers, as they would otherwise realize that I, and not he, was fully responsible for these designs and architectures. Basically, he tried to avoid being discovered for who he was: someone bragging with another’s work while reality would show he is not very knowledgeable or skilled. I’ve got whole stories on this.

  • In 2015 I started developing a components framework for the .NET framework which enables database access through a universal model for any database that supports the SQL language. Its name is a word play on the ADO.NET framework: UDO.NET, which stands for Universal Database Objects. In jargon: as part of this components library I developed an object-oriented model surrounding the SQL database language, with which database queries can be formulated in a consistent, unified fashion. If one migrates to a different database, no change to application code is required. For every database speaks its own dialect of the SQL language, and those differences become bigger when it concerns specific, high-level functionality for which a given database brand sports its own implementation. This component library, too, is a custom implementation based on the ADO.NET component base classes.

  • In 2016 I started development of a framework that translates between object models and database commands. This way one no longer has to work with database commands, but can instead work with data in an object-oriented manner. In jargon: the development of a custom entity/business-objects framework that in turn uses the UDO components and the SQL object model. Database compatibility added to the UDO layer automatically propagates to this entity framework. Developers can choose the data type of the primary key for each table individually.

  • In 2016, for the company C.E. Repair, I created a parts-ordering prognosis algorithm based on what is called in jargon a ‘4th-degree polynomial’. It was written entirely in SQL; even the arithmetic matrix calculations for this mathematical function over the parts order history were written in SQL (see the second sketch after this list).

  • Since 2000 I have also worked with database metadata. This is database data which describes the tables, fields, indexes and key definitions. Joord himself classified mastering metadata as expert-level knowledge.
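
Two of the items above lend themselves to a short illustration. First, the explicit access plan from the 2006 Kraan Bouwcomputing item, sketched in InterBase/Firebird-style SQL with hypothetical table and index names:

    SELECT A.ARTICLE_NO, A.DESCRIPTION, A.PRICE
    FROM ARTICLES A
    WHERE A.SUPPLIER_ID = 42
    PLAN (A INDEX (IDX_ARTICLES_SUPPLIER)) /* force the optimizer to use this index */

Second, the 2016 polynomial prognosis. The full least-squares fit is too long to reproduce here, but here is a sketch of its building blocks, assuming a hypothetical PART_ORDER_HISTORY table: the normal equations of a 4th-degree fit need the sums of the powers of x up to x^8 and of x^k·y up to k = 4, all of which plain SQL aggregation delivers:

    -- x = period number, y = number of parts ordered in that period (hypothetical columns)
    SELECT
      COUNT(*)            AS n,
      SUM(x)              AS sx1,  SUM(x*x)             AS sx2,
      SUM(x*x*x)          AS sx3,  SUM(x*x*x*x)         AS sx4,
      SUM(x*x*x*x*x)      AS sx5,  SUM(x*x*x*x*x*x)     AS sx6,
      SUM(x*x*x*x*x*x*x)  AS sx7,  SUM(x*x*x*x*x*x*x*x) AS sx8,
      SUM(y)              AS sy,   SUM(x*y)             AS sxy1,
      SUM(x*x*y)          AS sxy2, SUM(x*x*x*y)         AS sxy3,
      SUM(x*x*x*x*y)      AS sxy4
    FROM PART_ORDER_HISTORY;
    -- These sums fill the 5x5 normal-equation matrix; solving that system (also
    -- expressible in SQL arithmetic) yields the five polynomial coefficients.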

I am willing to testify to the above statements about my knowledge and accomplishments under oath. Furthermore, I am willing to demonstrate and document my source code for the purpose of affirming my knowledge, under the watchful eye of legally recognized database experts, .NET experts and justice officials, at any time. This also includes a possible source code audit.

GitHub

One of the outright lies of cabbalists and sympathizers within intelligence is the claim that my private software projects would not be my own, personal developments and would instead be taken from the source code sharing site GitHub. This lie is meant to complement the ‘schizophrenic with an image’ allegation in order to undermine my actual software expertise. The reality is that I don’t even have a GitHub account. What I do have is a Bitbucket account with Atlassian, with two-step authentication enabled. This is where I keep my source code, and the storage is configured as a private repository that does not allow external access.

A history is also kept of all uploaded work updates; each upload is called a ‘commit’. These are incremental updates of a continuous development cycle of my projects. The next two images show the commit history for the UDO project and for the entities framework project. With this commit history the full development cycle can be followed step by step across the timeline. For both projects I have also opened the cloud settings, which show the configuration of these repositories. All my source code is private property and authentic private development. Not a line of source code comes from any other source like GitHub or some other website. Whatever has been uploaded to GitHub is a stolen copy of my work, uploaded by cabbalists to fabricate this ‘Taken from GitHub’ illusion.

Commit log UDO framework

Commit log entity framework

Profile

Joord wasn’t such a great guy socially. He had an arrogant air and tried with this attitude to marginalize me and my work. This arrogance wasn’t aimed at just me: other employees, the repair engineers, also had to suffer from it, especially when they were not around. The fun part is that he wasn’t always correct in his claims. I have had to correct him multiple times in the field of .NET and database metadata. He didn’t like that.

He also removed the ‘composite indexes’ from the C.E. Repair production database with the argument ‘The database optimizer will create them as needed.’ That is only partially true. Such automatically created indexes won’t exist forever: they are a temporary creation and, depending on resource use by the database and the queries being run, they may be discarded again, after which they have to be created anew. This can lead to momentary delays when certain data hasn’t been accessed for a given time span. These composite indexes have to be present permanently, especially for tables with many rows.
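
For clarity: a composite index is a single index over multiple columns. A minimal sketch in generic SQL, with hypothetical names, of the kind of permanently present index at issue:

    -- Permanently present, so lookups filtering on customer and date stay fast
    -- even when the data hasn't been touched for a while.
    CREATE INDEX IX_Orders_CustomerId_OrderDate
      ON Orders (CustomerId, OrderDate);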

Joord also berated the 4th-degree polynomial extrapolation. However, the implementation could not be optimized or formulated differently; his suggestion of using ‘recursive common table expressions’ was not workable for the matrix calculations. There are many more issues I could relate with regard to this guy.

He also wanted me to search for an illegal license key for the Microsoft SQL Server Datacenter product, which has a price tag of several thousand euros. I flatly refused this on grounds of principle. Another colleague, Ton, was a witness to this request: he was watching along with Joord’s desktop, which was mirrored to the large display in the room, and noted my reply to Joord’s request.

He also had a derogatory attitude towards female employees. Into every web page of the software to be built, which would also be used by the female employees of C.E. Repair, he embedded a translucent watermark of a suggestive female figure, lying on her back with one leg over the other. I pointed this out to him, upon which he remarked that female colleagues offended by it would just have to ‘grab a razor blade’. The next day, however, the watermark had been changed into a non-suggestive ballet figure. Furthermore, he obviously was a fan of the Lucifer series that ran on the U.S. Netflix at the time. And this guy also exhibited occult signatures.

Closing

The forensic analysis I conducted on the ZorgNed database is based on over 20 years of experience working with databases of various brands, as well as a thorough knowledge of data and data types in general. Through this I was able to conclude that the relevant primary key value of the particular database record was a key manually defined by a human.