How to build an un-hackable password

Okay, another friend got hacked yesterday – here’s how to build an un-hackable password:

1. Pick a favorite date, like Bastille Day, your dog’s birthday, the moon landing or something.  You could also do something like a favorite movie mixed with its author’s name.

2. Reverse it, so for example today would be 8102voN82.  You could do the film thing like “Clarke1002Arthur”.

3. Pick two letters from the site you’re visiting or app you’re using, best is the leading letters of the first two syllables, so FaceBook would be FB.

4. Pick two numbers and use the Shift key to make them special chars, so 3-4 would be #$

5. String all these together to make your password, so my example here would be Clarke1002ArthurFB#$. Or add them together the other way to make #$FBClarke1002Arthur.

You can re-use this algorithm anywhere: it gives you a unique password for every site or app, and you only have to remember the pattern you use to build it.  By rough estimate, it should take an average PC something like three million years to crack a password this long by brute force.
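
For the code-minded, here’s a minimal Python sketch of the recipe above.  The helper names and the crude “first letter plus first inner capital” approximation of syllables are my own illustration, not part of the recipe itself.

```python
def site_letters(site_name: str) -> str:
    """Approximate 'leading letters of the first two syllables' as the first
    letter plus the first inner capital (FaceBook -> FB), falling back to the
    first two letters.  A rough stand-in for the rule described above."""
    first = site_name[0].upper()
    inner_caps = [c for c in site_name[1:] if c.isupper()]
    second = inner_caps[0] if inner_caps else site_name[1].upper()
    return first + second

def build_password(site_name: str,
                   base: str = "Clarke1002Arthur",   # your reversed date or film/author mix
                   shifted_digits: str = "#$",       # your two shifted numbers
                   suffix: bool = True) -> str:
    """Combine the personal base, the site letters, and the shifted digits."""
    letters = site_letters(site_name)
    return (base + letters + shifted_digits) if suffix else (shifted_digits + letters + base)

print(build_password("FaceBook"))                 # Clarke1002ArthurFB#$
print(build_password("FaceBook", suffix=False))   # #$FBClarke1002Arthur
```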


Leadership of IT/Software Teams in a Regulatory Environment

Part 3 – now let’s introduce regulation

First, let’s define “regulation” in context.  Specifically, I’m referring to ISO 13485, EU medical device regulation (GMP, or “Good Manufacturing Practice”), and FDA medical device standards.

Let’s also be clear: these aren’t just “guidelines” (insert Barbossa quote here) – they’re law.

As law, what is their purpose?  The goal of this regulation is to make certain that the products manufacturers make are safe, effective, consistent and unadulterated.  As a result, the regulations end up being implemented in ways that minimize mistakes and errors, and reduce or eliminate contamination wherever it can occur.  Consumers, as the beneficiaries of the regulation, get a high level of assurance that the products and devices they are exposed to do the job they are supposed to do in a way that is not dangerous to them.

It is also worth noting here that as law, violations can result in steep fines, product recalls, and potentially even jail time.  This is serious stuff.

That means you, as a leader, have the responsibility to implement compliance measures, and to ensure that your IT and software groups understand and follow the measures you put in place.

The requirements of regulation can stand in direct opposition to some of the activities your group performs.  Quite often a software group will be operating on an “Agile” methodology, possibly with scrum sprints and so forth – and these methods are oriented towards speed.

Regulatory requirements will impair that speed and that slowdown will appear to be simply “getting in the way” of work.  Persons unfamiliar with the need for regulation will see it as unduly burdensome – indeed, I can quote a peer who totally missed the mark of what those regs were for:  “No one cares anyway, these are just here to make it look good.”

Needless to say, he didn’t last long.  Sadly, many developers – including most who come directly from university – have no experience with such regulation and will view it as an unnecessary impediment.  I myself, as a software developer many years ago, left a contract that I felt was overly constraining because it was operating under FDA medical device regulation, and I didn’t appreciate why it took six months to get from proposing a spelling change to final deployment of the release containing the correction.

These requirements make progress feel very plodding and restrictive, particularly for those junior members of your team.  And they never end.  They are part of the job, every single day.  It isn’t just something you do once, qualify for, and then amble merrily forward – these actions and activities go into the job and become part of the fabric of how you operate.

What can good leadership do in such a conflict?  What is to be done here, and how can we exert good leadership in this environment?

Let’s examine the conflict first.  There are a few sources of conflict here:

  • Requirements for thorough examination of work products and the preparation for creating them can seem like an attack on the respect for an IT professional’s work.
  • The burdensome nature of documentation and preparation in advance of work can create a very tedious work environment, making it hard to want to come in each day.
  • Effects of both of the above and other impacts of regulation can fray the nerves a bit, and place demands on your mediation skills that you didn’t expect.

Needless to say, rising to the occasion here requires a lot of patience and discipline.  It also needs a great communicator, which I’ll get to in a moment.

Let’s begin with organization.

A lot of what you do, whether it is network infrastructure or software development, will require up-front planning – and most importantly, documentation of that planning, in order to provide an audit trail.  Whether you’re aiming to be compliant with ISO, GMP, or FDA, there’s a key question that you have to ask before you begin an operation:

Will my action, or any effect of my action, have direct impact on our final product?

For some things, the answer will of course be “no” – for example, establishing a new backup plan for your email server.  Regardless, before you begin, that question has to be asked and the answer documented.  For most regulation, that documentation ends when the answer is “no.”

However, what if the answer is “yes”?  In that case, you have to assess what you are doing, why you are doing it, why you think doing it will satisfy the original stated need, and what the risks are; plan mitigation for those risks; and have your rollout staged to assess success or fail conditions at every milestone.  This entire process is generally called “validation”.

In my own context, I created a form that enabled myself and others on my team to ask that question, and then to lay out the long set of considerations on changes to hardware and software in the IT group.  That form went into our Confluence server, and could be linked from there to Jira tickets created to represent the progress of the tasks being tracked.  For your own use, in whichever issue-tracking system you use (I refer to Jira here, just because it’s so common and it’s the one I’m most familiar with), you can put a field in your issues asking the very question about impacting the final product.  Then you link it to a new issue page with the content required for the “yes” answer.

And once filled out, the page may need to be locked in read-only status (GMP and FDA requirements both demand that no editing or deletion be available after finalization).

This satisfies the regulatory requirement, and the organization provides your team with clear addenda to the original requirements.
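
Purely as an illustration – the field names below are hypothetical, not taken from any particular Jira or DMS configuration – the content a “yes” answer requires boils down to a record along these lines:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValidationRecord:
    """Hypothetical sketch of what a 'yes' answer has to capture;
    the field names are illustrative, not from any particular system."""
    change_description: str                 # what you are doing
    rationale: str                          # why you are doing it
    satisfies_need: str                     # why it satisfies the original stated need
    risks: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)          # planned mitigation per risk
    rollout_milestones: List[str] = field(default_factory=list)   # staged success/fail checkpoints
    impacts_final_product: bool = True      # the gating question asked above
    finalized: bool = False                 # once True, the record should become read-only
```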

The need for organization (and communication, below) requires your Commitment.  Not just commitment to the team, but commitment to the mission of the company and the production of whatever products require the regulatory oversight.  Your IT / development group are key players in delivery of solid product to market.  If the commitment isn’t there, the team will know and it’ll show in how you deliver.

Next step we should look at is communication.

Quite possibly, this should be your first consideration, but I wrote it second here and we’ll leave it at that.

In your team, from day one of a person’s start or day one of implementing compliance, communication of the compliance effort will be key to making sure everyone stays on board with it.

I’ve found in both software development and network architecture that the most important factor in keeping the team aligned is making sure everyone knows and is clear about the answer to the question: “Why are we doing this?”  Knowing why gives us all not only a common ground and a team unifier, it also helps us all determine potentially better solutions than we could with just a team head knowing the why and issuing directives to meet it.

This also applies to ensuring the team gets on board, and stays on board, with compliance efforts.  They have to see the broadest picture of how the regulations help the business.  If your company manufactures widgets used in surgeries, your team needs to be reminded (perhaps even daily) that what you do helps people safely undergo and survive life-saving operations.  Their actions, every one of them, can potentially impact how well a widget works after manufacture.

In a lot of ways, this means that what you’re doing is linking the following of the regulation with the provision of a quality product, and instilling a culture of quality that goes into the tiniest details of everything your team does.

Which, when you think about it, is generally needed for a company to become great, isn’t it?

I make it sound easy, but it isn’t.  You have to beat this drum every day, and you yourself, as the leader, need to be its biggest evangelist.  Your Personality and Knowledge (see Part 1) tie in here, and are key factors in enabling this – you have to be able to be positive about the requirement of regulation and knowledgeable about its execution.  There will be days when you are tired, and simply don’t want to deal with it.  But remember – those widgets depend on you.  They depend on your team.  And the people undergoing surgery depend on you all.

Finally, let’s talk expectations

I don’t really want to call this a “final” topic, because there’s an enormous number of factors that can affect it.  But I only have so many hours in the day, and I’m calling these my top three items for being a successful leader in this environment.

Setting expectations of stakeholders and team members for the execution of projects and tasks is a key element of all work, whether it’s regulated or not.  Telling your boss how long project Y will take, and getting good estimates from your staff on how long tasks A, B, and C will take are key to that.  Regulations increase workload, there’s no two ways about that.  They also slow down progress.  But they do enhance quality.  They benefit consistency in the product(s) your company makes.  Knowing what features are being prepared and what to expect in each release, as well as knowing what steps are being taken to mitigate the risks involved will put everyone more at ease (and will ensure no interruption or disruption in production).

Linking the goals of the regulation with the production of a quality product falls directly into your skills of Motivation for your team.  Getting the team’s buy-in by involving them in the setting of proper expectations is the way to ensure the best possible motivation for success.

Making sure your team knows to take into account the added burden of the regulatory requirements will get you a good ways towards ensuring that you don’t have overblown demands on your team.  It also involves them all along the way to ensure they retain respect and participate in their own work environment.  This really applies to all environments, but it deserves special attention in a situation where a great deal of up-front and ongoing efforts require such detail.

Is there some magic bullet here?

Well, no.  Obviously there’s no “silver bullet” answer to anything in IT, but there are some very cool tools that can help you along the way.  I’ve mentioned Jira and Confluence already, and these are insanely useful in establishing organization, team communication, and helping to set expectations.  Really if I were setting up practically any environment, these tools would be first on my list.

There are also document management systems which enable GMP/FDA-compliant protection of docs, which might be required.  When these enter the picture, I generally advise that one keeps only what is absolutely necessary in such a DMS, and the rest in Confluence.

Additionally, Jira or other issue-management systems can be tailored to monitor risks and mitigation efforts, as well as validation efforts.  The reporting capabilities of these systems can make scrum meetings much simpler, as well as providing outgoing communication to non-IT stakeholders in the form of expected- versus delivered-work estimates, etc.

In Summary

Gathering this up, a regulatory environment heightens the need for a clear communication path as well as requiring a more organized IT department.  The company’s, and really the market’s, expectations of your firm’s output put an additional burden of caution on your IT staff.  This may not be suitable for all tech people, and there’s no shame in recognizing that you might not be one of those people for whom this is a good working environment.

If you’re comfortable with it, though, you’ll find that your skills in communication and organization are being called upon considerably more strongly than in a “normal” non-regulated situation.  You’ll still be responsible for aiding in motivation, commitment, and the rest, but GMP and FDA regs put you in a special situation where you have to be stronger in certain areas to enable your team to succeed.

Leadership in its raw form doesn’t change – but different aspects of it are called upon more strongly in a regulated environment, and you need to be prepared to engage appropriately.


Let’s Talk Briefly about Deep Learning

You’ve heard the name over and over, and for most of you it probably settles into the same category as Harry Potter’s “levitation charm” as far as whether you need to understand it.  That’s cool, most people will never need to know this stuff, in the same fashion as you don’t need to know the specific chemical reactions that go on when gasoline burns inside your engine cylinders.  You just want to turn the key and go!

When the term “Deep Learning” started getting used, the media latched onto it because it was a pretty sexy marketing term, and it sold a lot of print and eyeballs.  This is because we have a somewhat intuitive sense of what we conceive “Deep Learning” to be – it conjures up images of professors doing serious research and coming up with great discoveries, etc.  Because of this, a lot of intrinsic value is being assigned to the topic of Deep Learning, so much so that it seems a startup can automatically generate a B-round or further simply by stating that “we use Deep Learning to make the best decisions on what goes into today’s lunch soup”.  That’s a bit of a stretch, but not by much.

So What Is It?

We need to start by looking at the name:  Deep Learning.  These two words, just like pretty much everything in computerese, have very specific meanings which often don’t attach well to the real world.  In this case, they do, which is fortunate for me because it’ll help keep this article brief.

Let’s tackle each word separately.

Learning.  A DL system quite often has a multitude of modules or units, each of which is responsible for Learning something.  If it’s a good object-oriented model, each unit is responsible for learning one thing, and one thing only, and becoming very good at that one thing.  It will have inputs, and it will have outputs, and what it does internally might involve multiple stages of analysis of the inputs that help it craft an output that has “meaning” to the other parts of the system that use it.

What would be a good example of a learning unit?  How about something in image identification?  Within a satellite photo, parts of the image get pulled as possible missiles, or tanks.  These candidate regions have measurable attributes, such as the ratio of length to width, possibly their height (if the image was taken from an angle not directly overhead, height can be calculated), maybe differences in shading from top to bottom (the turret of a tank will cast shadows), etc.

A unit that is given candidate images might build a statistical history based on its prior attempts to classify, its positive and negative results, and may (if it is particularly advanced) also ‘tune’ its results based on things like geographic location and so on.  In its operation, it will come back with a statistical probability that object X which it was given is a tank or a missile.  Over time it will be ‘trained’ – either by itself, or by humans inputting images to it with flags saying “tank” or “no tank” (when humans give it cues like this it is called “supervised learning”) – to be better at identifying candidate objects.
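
As a toy sketch of that idea – not any real deep-learning framework, just plain Python invented for illustration – a single learning unit might keep running statistics that it updates from human-labeled examples:

```python
from collections import defaultdict

class TankClassifierUnit:
    """Toy 'learning unit': remembers how often a coarse feature bucket
    turned out to be a tank, and reports that history as a probability.
    Purely illustrative -- real systems learn far richer representations."""

    def __init__(self):
        # bucket -> [times it was a tank, times it was seen]
        self.history = defaultdict(lambda: [0, 0])

    def _bucket(self, length_to_width, has_top_shadow):
        # Collapse the raw attributes into a coarse bucket
        return (round(length_to_width, 1), has_top_shadow)

    def train(self, length_to_width, has_top_shadow, is_tank):
        """Supervised learning: a human supplies the 'tank' / 'no tank' flag."""
        counts = self.history[self._bucket(length_to_width, has_top_shadow)]
        counts[0] += int(is_tank)
        counts[1] += 1

    def classify(self, length_to_width, has_top_shadow):
        """Return the learned P(tank); 0.5 if this bucket has never been seen."""
        tank, seen = self.history[self._bucket(length_to_width, has_top_shadow)]
        return tank / seen if seen else 0.5

unit = TankClassifierUnit()
unit.train(2.5, True, is_tank=True)
unit.train(2.5, True, is_tank=True)
unit.train(2.5, True, is_tank=False)
print(unit.classify(2.5, True))   # 0.666... -- "about 67% tank"
```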

Deep.  The capability of a computer system using Deep Learning is generally amplified by how many different things it can learn, and these learnable things, when placed in line with one another (or, if you prefer the visual, “stacked up”), produce depth.  A system that can stack many layers is considered Deep.

The example above, where a system was given a photo object, might be part of a system which takes a single large photograph, and multiple different units act on it.  Let’s walk through the hypothetical layers:

The first might look for straight lines.  It trains itself to be better and better at examining colors of pixels in images and finding ones that line up as straight, given the resolution of the image.

The second looks for corners, where straight lines meet.  It takes the identified straight lines from unit one, and then will seek places where their ends meet.  It will train itself to avoid “T”-shaped junctions, and might decide that “rounded” corners are acceptable within a certain threshold.  It outputs its data to…

The third, which takes corners and tries to find ‘boxes’ – places where multiple corners might form an enclosed space.  It must train itself to avoid opposite-facing corners, etc.  It then sends its candidate ‘boxes’ to:

The fourth, which takes boxes and begins to look for color shading gradients which can describe sides, tops, fins, etc.  It compares these values to its own historical knowledge and ‘learns’ to classify these boxes as specific objects – and outputs things like “75% missile” or “80% tank”, etc.  Particularly sophisticated versions might even be able to compare signatures finely enough as to identify the type of missile or tank.
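
Here’s a hedged sketch of the stacking itself.  The stage names and the trivially fake return values are invented for illustration; only the shape of the pipeline – each stage consuming the previous stage’s output – matters here.

```python
def find_lines(image):
    # Stage 1: return candidate straight-line segments found in the image
    return [{"start": (0, 0), "end": (10, 0)}, {"start": (10, 0), "end": (10, 5)}]

def find_corners(lines):
    # Stage 2: return points where line ends meet at (roughly) right angles
    return [(10, 0)]

def find_boxes(corners):
    # Stage 3: group corners into candidate enclosed shapes
    return [{"corners": corners, "aspect_ratio": 2.0}]

def classify_boxes(boxes):
    # Stage 4: score each candidate -- e.g. "80% tank"
    return [("tank", 0.80) for _ in boxes]

def deep_pipeline(image):
    """'Deep' just means the learned stages are stacked in sequence."""
    return classify_boxes(find_boxes(find_corners(find_lines(image))))

print(deep_pipeline(image=None))   # [('tank', 0.8)]
```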

Each of the units described above might be composed of multiple sub-units which share connections with one another (which, by the way, is a “neural network” – different discussion, might write about that some time too).  These sub-units would be hidden from the units outside their parent.

So between the two, we have a Deep system that Learns to do its job better over time.

And before you ask, yes, there is also “Shallow Learning” – Shallow simply refers to a “stack” that doesn’t have many layers.  There’s no set boundary between what “Shallow” versus “Deep” is.

How Good Is It?

As with pretty much every computer system ever invented, the answer to the question “How good is it?” is:  GIGO.  Garbage in, garbage out.  The system is only as good as its training.  In the example above, if insufficient valid positive and negative images are given to the system to train on, it can suffer from muddied “perception” and never get better.

However, DL is powerful.  By that, I mean that when compared to a human, it can reach or exceed our capabilities within its specialized tasking in a very short (comparatively) period of time.

For example, I have a set of rules I built in Outlook over the course of probably fifteen years or more, and these rules successfully negate 99.5% of the spam that lands in my inbox (today’s example: 450+ messages are in my ‘Deleted’ folder, about 5 landed in my inbox that had to be deleted).  Occasionally, I get a ‘false positive’ and a good mail will get deleted, but it’s pretty rare.  These rules have taken over a decade to produce, and act on a host of subject triggers, address triggers, body triggers, etc.  A DL system can establish a similar ‘hit rate’ to my rules in only a few days, perhaps as fast as a few hours.

But these factors depend on how well the system is built, and how good its learning data is.

What Does It Mean To Me?

Well, now that’s the catch.  Depending on your life, DL systems may not have a direct impact on you at all.  Plumbers, for example, are unlikely to care.  Insurers, actuaries and other folks whose livelihoods depend on statistical analysis, however, had better sit up and take notice.  The initial “green field” territories where statistics are the primary function have already been broadly affected by Deep Learning.  Today, areas such as FinTech and advertising sales are steadily moving to use DL in certain aspects of their business.  Self-driving vehicles are a perfect example of another “Deep Learning” application.  What do you think those first few highly-publicized autonomous vehicle voyages were a few years back?  Supervised training.  They were teaching the vehicles how not to get into wrecks.

We’re just beginning to see learning systems entering healthcare and other more ‘soft’ sectors.

And here is where the warning bells sound.  Not because SkyNet is going to set off the rise of the machines (though there is some legitimate reason to be concerned in that regard, particularly when you see robot chassis and drones armed with weapons).  No, the concern should presently be directed at how these tools get used.  As I mentioned, these are powerful systems that can be used to great benefit – and can also be used to do great harm.

For example, one of the innocuous sentences I’ve seen with regard to the application of a learning system to healthcare was:  “Given the patient’s past history, and their medical claims, are you able to predict the cost for the next year?” (Healthcare IT News).  Okay, in context, that question was raised with the intent of predicting utilization, how much hospital care might be needed across a population.

But what if the question is “Find the best combination of interest rates to keep people paying their credit card bills without completely bankrupting them, and to maintain their indebtedness for the longest period.”   In that case, Deep Learning can be used to figuratively enslave them.

What if that question was asked by an insurance executive in the USA, wanting to see where the profit line cuts and using that data to kick people off their insurance who would negatively impact the company’s margin?  In that case, Deep Learning can be used quite literally to kill people.

The tools will only be used within the ethical boundaries set by the persons who use them.  In the United States and several other countries, there are certain political parties who feel that ethics have no place in business – that might makes right.  Just as with dangerous vehicles, dangerous weapons, and other hazards, we as members of our societies must make our voices heard – through the voting booth, in our investment choices, in journalistic endeavors – and ensure that these tools are used to benefit, not harm, the public.

It might even be worth considering that from a software engineer’s perspective, perhaps it is time to establish something similar to the medical profession’s Hippocratic Oath:

First, do no harm.


Leadership of IT/Software Teams in a Regulatory Environment

Part 2 – How Does This Apply to IT?

All right, so we’ve laid out what it means to be a leader – but how does this apply to IT?

At this stage we have to look at what it means to work in an IT environment – and there are some wildly different attitudes to deal with.  Let’s approach this from the two extremes, as the rest fall somewhere in between.  I will differentiate them using names that are general stereotypes, so please forgive me if you feel yourself categorized…my intent is to speak in generalizations, not about specific persons.

The two types are:

Administrators:  generally these are “IT Managers”, “System Administrators”, “DBAs”, “Network Administrators”, or various other similar names.  The person with one of these labels is generally tasked with keeping things running smoothly so that users on the computer system can perform their tasks efficiently.  As a result, these persons are driven largely by the avoidance of downtime – and this means maintaining the network’s status quo.  Their work revolves around operations such as fault monitoring, storage maintenance, backups and recovery, and so on.

…and…

Developers:  these persons are focused on creating new systems and software to enable users to accomplish their tasks in innovative or more effective ways.  These can be web developers, application developers, front-end developers, etc.  This career is dominated by a constantly-changing landscape of new languages and architectures, and where the Administrator has a fanatical devotion to defense of the existing systems, Developers have an equally devoted attitude towards inventing new systems.

Naturally, these two come into conflict when Developers have new systems they wish to add to the production networks.  Lately (in the last 5-10 years), this meeting point has seen the growth of a new “bridge” career, DevOps.

Additionally, let’s not overlook the elephant in the room here.  IT staff have a reputational stereotype of being less than optimal in their social skills.  I think we can flatly state that there is more than just a grain of truth to this, and in handling your staff, one should take this into account.  This isn’t making excuses for them, it is preparing you to handle them effectively.

Despite their differences, these career tracks share many similarities when it comes to the exercise of leadership.  This is not an exhaustive list – just the major bullets.

Desire for Respect

Let’s face it – most of these individuals have an ego, and because they are not especially social creatures, they tend to be a bit sensitive.  Fortunately, as many of these persons tend towards an introverted personality, they don’t demand a great many special efforts in this regard, and often find displays to be uncomfortable.  That doesn’t make their need for respect any less valid, however.  In many ways, it actually makes it more challenging.  Often it can be best shown with an honest thanks for contributions.

Unlike a salesperson, whose recognition is often quite public and rewarded monetarily (my suspicion as to why is that a salesperson’s performance is so easy to measure in the volume of revenue they generate), awarding respect to an IT person tends to require a more personal touch.  It shows you know what they are doing for you (even if you don’t necessarily understand the nitty-gritty of it), and that you see the effort required to accomplish what they do.

Respecting your staff shows them that you care, demonstrates your commitment to them, is integral to good communication with them, and is a source of motivation.

Need for a Quality Work Environment

IT Workers spend a great deal of their time thinking.  This level of thought requires uninterrupted time spent researching, exploring, experimenting, or even simply sitting.  It is the antithesis to the “buzzy” open office which seems so popular among business-degree managers the last couple decades.  (One can easily unearth enough research to choke a horse demonstrating how bad an open office is for any worker at all, much less IT staff.)

I often tell people “IT isn’t rocket science,” in a turnaround on the classic cliché – “Because rocket science is only first-year physics, and IT is harder than that.”  To perform this sort of thought-work, one must have a work environment that protects the workers’ concentration.

In addition to a space to work in, proper tools and equipment are also needed.  IT workers can easily grow ‘stale’ relative to the rest of the industry if they don’t keep up to speed on the latest developments every so often, so a training budget and an equipment budget should be included as part of the cost of each IT position.

Providing a quality work environment shows care, expresses your knowledge of what they require, and demonstrates that you are committed to them.   

Desire for Recognition

I mentioned just before that recognizing a salesperson’s contributions is an easy affair – it can be tied directly to that person’s revenue generation in the form of commissions.  Measuring the contributions of an IT worker is much more difficult – and should be a major effort.  As leaders and managers, we must ensure that we put effort into recognizing the efforts of our IT staff.  This is a particular failing many companies have when dealing with Administrators in particular, because most non-IT persons really only think of the Admins when something breaks.

Quite often, the hardest tasks an IT person performs also end up being the least visible.  Ironically, some of the simplest things they do end up generating the most visible results.  This is enough of a truism that I will often instruct my junior employees that when someone thanks them profusely for something that really didn’t require a great deal of action, they should save those kudos and hold onto them for the time when they really do put a lot of blood and tears into something – because those big heavy tasks are often the ones that “keep the lights on” and the users never even know they happened.

So the question here is how do we find events worthy of recognition in a group whose events are not necessarily widely visible?  There are a few ways to approach this:

  1. Observe events around the world. IT is no longer restricted by geography, and these people are guarding you from a wide variety of threats as well as building new systems for you.  Take the worm “NotPetya”, for example – a global event that cost the shipping firm Maersk alone hundreds of millions of dollars.  Did your network suffer from it?  If the answer is ‘no,’ then there is a good example of your networking and system administration personnel doing their jobs well.
  2. Establish a career track for tech employees. Often, firms will have career tracks that have only one lane, and end up in a business office.  Following this track is both limiting – there can only be so many managers – and crippling:  by promoting a successful IT staffer, you can gain a crappy manager at the expense of a good tech.  If, on the other hand, you put together a “graded tech track” with steps that have both titular and compensation benefits, you can establish a clear path of recognition that IT staffers can aspire to and excel in.
  3. Budget for career-based training. This is closely tied to ‘quality work environment,’ but is also a recognition factor – you recognize value in an employee by keeping him/her relevant to moving technology.  Each of your employees has a salary figure and an overhead figure that goes into your budget.  Training costs should be included in that overhead figure – enough to send someone to a week’s training once a year is what I’d advise.  Tell your employees and have them choose what that training money gets spent on.  Inform them to pick something relevant to the job, or sell you on why it is relevant if you don’t see it.  Once they have something, send them.  Give them on-the-spot bonuses for becoming certified in some technology.  Pay for the first exam and maybe even a re-try if they don’t pass the first time.  A classic story about this, I don’t recall where I first heard it:

Manager: “I want to spend $x to send so-and-so to training”

C-level:  “Why?”

Manager:  “Because it’ll make them better employees.”  (Duh.)

C-level:  “What if I train them and they leave?”

Manager:  “What if we don’t train them and they stay?”

Recognition of your staff sends a message – it is clear communication.  It also falls into the zone of commitment, care, and demonstrates that you have a personality that shows long-term integrity.  Not surprisingly, it is also a great source of motivation, so long as it is provided with fairness.

Need for Downtime

Not everyone can be constantly on an A-type personality frenzy bender.  IT persons in particular don’t enjoy stress situations.  They can burn out just like anyone else, and quite often the crises they deal with have a far more strategic impact than most of the rest of the company.  You want to make sure that IT staffers have a clear head when tackling your business’s problems, because if they don’t, the repercussions can be more far-reaching than you wanted.  So…provide them with some downtime.  Many startups see this in the form of games (ping-pong tables, foosball, etc.), and the best ones recognize it with time itself.  I recommend giving your staffers a “20% buffer” – meaning that during a given week, they can spend a day’s worth of time researching new stuff, exploring new tech, etc.  Back when I first started in contracting in the 90s, our firm gave us a mandate that 80% of our time needed to be spent on billable action, the remaining 20% was ours.  Building stuff, reading up on new tech, whatever.  This was a really great way to let off some steam, and the team knew not to abuse it.

Some admins are on call 24/7.  Many developers and testers will spend loads of extra hours at crunch-time before a release.  Remember when they have to put in those extra hours, and give them downtime to compensate.

IT people also tend to get lost in tunnel vision very easily, spending far more hours in the office than they should, and this can cost them in their home lives.  When the 5 O’clock hour hits on a regular day, tell them to go home.  Help them keep a good work-life balance.  A burned-out employee who quits the job after two years is of no use to you, and letting them burn out that way negates the need for care that they trust you to have.  You need them to be willing to spend that extra time when it’s needed, and never take it for granted.

Providing downtime again demonstrates you care, that you have the knowledge necessary to lead them, that you are committed to their well-being and long term career, and motivates them to learn more to help them excel in their jobs.

Mediation

Lastly, I want to focus on an aspect of your role binding the entire team together.  At the beginning of this part, I pointed out two separate groups that have conflicting agendas – admins, who wish to maintain a status quo of sorts, and the other actively seeking to change it.  These two groups often come into conflict, and they also come into conflict with other parts of the company.  It might be professional, it may very well be personal friction.  Whatever the cause, if a serious conflict is left to fester it can damage your team irreparably.

Whenever these conflicts arise (and let’s assume you know which ones can resolve themselves successfully and which ones require your intervention), your role becomes that of a mediator.  I strongly suggest you enroll in a mediation communication class, or at least read a few books on the subject (surprisingly a lot of books focused on relationship therapy can provide some insight here as well).  This skill is an absolute must have if you wish to be a leader rather than simply a manager.  A famous line about proper mediation is that when a compromise is found, neither side goes away happy.  However, as a mediator you can at least see to it that the sides also don’t go away mad.

Proper mediation relies entirely on your skill as a communicator, and will strain your listening muscles heavily.  It is, however, a key element in demonstrating to your team that your integrity is of value to them.

Summary

The aspects of leadership play out in a lot of ways – subtle and not so – with IT workers, who are a rather unique bunch.  Their needs and desires are particular to the work they do, but they are met by the same fundamentals.

The six qualities of leadership (care, personality, knowledge, motivation, commitment, and communication) all contribute in different ways to meet the unique needs of the IT team.  When you meet those needs for recognition, respect, downtime, fair mediation, and provide these in a quality work environment, you actively use each of the six to help your team.  And as your actions serve as mechanisms for communication, the team will recognize that you are leading, not just ‘managing’ or ‘ordering.’


Leadership of IT/Software Teams in a Regulatory Environment

Thomas Theobald

Part 1 – What Exactly Is “Leadership”?

leadership

/ˈliːdəʃɪp/

noun

“the action of leading a group of people or an organization, or the ability to do this.”

 

Not exactly helpful, are they?  Wikipedia is slightly better:

 

“[Leadership] is a formal or informal contextually rooted and goal-influencing process that occurs between a leader and a follower, groups of followers, or institutions. The science of leadership is the systematic study of this process and its outcomes, as well as how this process depends on the leader’s traits and behaviors, observer inferences about the leader’s characteristics, and observer attributions made regarding the outcomes of the entity led[1].”  (Antonakis, John; Day, David V. (2017). The Nature of Leadership.)

 

There are thousands upon thousands of different works on the subject of leadership, going back farther into history than the Roman Empire.  What it really boils down to, at its core, is that to lead is to influence others into behaviors, commitments and actions in service of one’s goal(s). 

 

Let’s dive in a little.  We’re going to explore what it means to exercise “Leadership”, what it takes to gain that influence over others.

 

First, we must recognize that influence over others isn’t something we can take – it is something that is given.  Whether voluntarily or through coercion, a person chooses to be influenced by someone.  That choice may not be consciously made…it may be something baser, more instinctual, and quite often this is the case whether one is in an office, a political party, a religious revival, or other venue.  It appeals to not only the comprehension of the team, but also to their emotional triggers.

 

Less a mechanistic approach to guiding persons, leadership really is much more a “way of living” in the context of one’s colleagues.

 

To exercise Leadership, then, is a bit like playing a psychological game, guiding others’ thought processes to coincide with one’s own.  Just as each person owns his or her own emotions, team members must make their own commitments to the team – a leader finds ways to enable them to make those commitments.  How does one do that?  The United States Marine Corps focuses on six factors that lead to successful leadership:

 

Care

This, I feel, is the most important distinction between a leader and a manager.  A leader must care about the persons entrusted to them.  That care is demonstrated by actions that follow one’s promises and commitments.  When you care for your team, you engender a protective atmosphere for them, enabling them to feel safer around you – you become their guardian as well as their colleague.

Personality

A specific personality type is not absolutely necessary to exhibit leadership, though some personalities find the communications aspects easier.  I think it may be easier to relate this idea of ‘personality’ rather as sincerity, or genuineness of the leader.  When you are sincere with your team, they know they can place trust in you – another safety factor.

Knowledge

Having boundless knowledge on the relevant topics is very helpful, of course – but is it really necessary?  Not so.  Understanding the limits of one’s knowledge is equally important.  The most important aspect of knowledge, though, is recognizing where it is present in your team and promoting team members’ own knowledge.  When you use your team’s knowledge wherever and whenever possible to enable them to step forward, even if you already can answer the need yourself, you build your team’s confidence in themselves – you promote their ability to excel, strengthening them.

Motivation

Succinctly, motivation in this context is the strength of effort used in the persistent pursuit of one’s goals.  Maslow’s hierarchy of needs shines a light directly on the source of the motivation of every human:  we all share physiological and safety needs; we quite often share the needs related to belongingness, love, and esteem; and we are all usually quite unique in our self-actualization needs.  A real leader finds ways to bind the needs of the team at various levels of the hierarchy together, directing the members towards a common goal – motivating them all by giving them ways to self-motivate.

Commitment

Closely tied to the concept of sincerity I mentioned above with Personality, Commitment means to support the team with no reservations, during good and bad times.  It embraces the team as a living thing, a culture, rather than a tool that can be pulled from a satchel and put away when not needed. When you commit to the team, you are accepting that you work together as a common course of action, which engenders similar action by team members, creating a self-reinforcing group. 

Communication

Finally, we come to communication, the bindings that tie all the others together.  A person simply cannot be a good leader if they are not also a good communicator.  Transparency, clarity of vision, visibility, and receptivity are key facets required of a good communicator.  When you can spell out the goals set for the team clearly, make yourself constantly available to the team, and actively listen to the voices of the team, this skill at communication earns the respect of your team.

 

 

Summary

As I mentioned before, there are theories on leadership that go back thousands of years, so one could just as easily pick a definition from any period along the way to compare against.  However, I think it will suffice in this context to use the USMC six traits – surprisingly enough, leadership is a near-universal concept, whether in the uniform of a military officer or the business-casual of a startup CTO.

 

Notice that each of the six focuses on strengthening the individual members of the team – and tends to de-emphasize the role of the leader her- or himself.  This is not to say the leader should play no role, but rather that a successful leader encourages the team to excel.  The most important take-away for aspiring leaders is:

 

This isn’t about you at all.  It’s all about the team.

 

In the next section, we’ll talk about how these apply in a software development and IT arena.

 

 

 


RAID-5 and the Sky Is Falling

The Situation

I’ve seen, over the last few weeks, more than a few posts on a popular IT hangout site proclaiming in loud, evangelical voice that “RAID-5 is terrible for spinning disks, never use it!  If you do, you’re a stupidhead!” and similar statements.

I’m here to tell you that’s not an appropriate answer.  In fact, it’s tunnel-vision BS.

I’m also here to remind you that RAID is not a backup – it is avoidance of downtime, and it is reliability of storage.  If you are relying on a RAID array to protect you from data loss, you need to add some extra figures to your budget.  You cannot have your production system also be your backup repository.  If you think that you are safe because all your production data is on a RAID, and you don’t bother with a proper backup, you are going to be in deep kimchee when you have a serious issue with your array.

Now, I suspect there is a kernel of truth inside the concern here – it seems to stem from an article written last year whose theme was “Is this the end of RAID-5” or something similar.  That article was quite accurate in its point – that with the escalating size of drives today, and the numbers of them we are using to produce our volumes, it is inevitable that a drive failure will occur – and that during a rebuild, it becomes a mathematical likelihood that a read error will result in a rebuild failure.

All quite true.

But in many of the conversations where I’ve seen the doomsayers trumpeting their end-of-the-world mantras, the volume sizes involved simply do not justify the fear.

Let’s  take a realistic look at RAID fails, and figure out the real numbers, so we can all breathe a little calmer, shall we?

As a goal for this article, I want to give you the ability to calculate the odds of data loss in your own RAID systems when we’re done.

First off, we have to look at the risk we are mitigating with RAID…drive failures and read failures.  Both come down to a small percentage chance of failure, which is best tied to the figures “Annualized Failure Rate” (AFR, the percentage of drives that die in a year) and “Unrecoverable Read Error” (URE, the chance that an attempt by the array to read a sector fails, usually due to a bit error).

Google wrote a paper on drive failures about ten years ago, which showed that drives which don’t die in the first few months of life generally last for five years or so before their AFR climbs to about 6%-8%, which is generally considered unacceptable for datacenter or other usage that requires reliability.  As it happens, Backblaze (backblaze.com) is a datacenter operator that publishes its own empirical hard drive mortality stats regularly, so these figures can be updated in your own records using accurate data for the brands of drive you use.

The most current Backblaze chart as of the time of this writing can be found here:  https://www.backblaze.com/blog/hard-drive-stats-for-q1-2018/

So let’s begin, shall we?

During this article, I’m going to spell out several different scenarios, all real-world and all appropriate for both SMBs and personal operations.  I have direct, hands-on experience with each of them, and it is my hope you’ll be able to perform the same calculations for the arrays within your own sphere of control.

Array 1:  4 Western Digital Red drives, 4TB each in a RAID-5 array.

Array 2:  4 HGST NAS drives, 8TB each in a RAID-5 array.

Array 3:  8 Western Digital Red drives, 6TB each in a RAID-6 array. (we’ll also run over this in RAID-5 just to be thorough)

Array 4:  12 Seagate Iron Wolf Pro drives, 10TB each in RAID-6 (as with the above, we’ll hit it at RAID-5 too)

Array 5:  12 Seagate Enterprise Capacity drives, 8TB each in RAID-6 (and RAID-5)

Array 6:  12 Seagate 300GB Savvio drives, RAID-5

Array 7:  7 Seagate 600GB Savvio drives, RAID-5

(Note:  Enterprise Capacity drives have been re-branded by Seagate and now go by the name “Exos”)

We start by collecting failure rates for those drives: both annualized failure rates from the empirical charts at Backblaze, and the averaged bit-read error rate.  Note that AFR increases with age, high temperatures, and power cycles; it drops for things like using helium as a filler (despite this making all your data sound like it was recorded by Donald Duck).  The bit error rate figures are drawn directly from the manufacturers’ sites (and can often be found listed as BER, “bit error rate”), so there will be some ‘wiggle room’ in our final derived figures.

Drive                   Annualized Failure Rate   Bit Error Rate
WD Red 4TB              2.17%                     1 per 10^14
HGST NAS 8TB            1.2%                      1 per 10^14
WD Red 6TB              4.19%                     1 per 10^14
Iron Wolf Pro 10TB      0.47%                     1 per 10^15
Seagate EC 8TB          1.08%                     1 per 10^15
Seagate Savvio 300GB    0.44%                     1 per 10^16
Seagate Savvio 600GB    0.44%                     1 per 10^16

For reference, the reason people often follow up the statement “RAID-5 is crap” with “unless you use an SSD” is that SSDs have a BER of around 1 per 10^17 – a bit error on an SSD is extremely rare.

With these figures, and with the sizes of the arrays and their types known, we can prepare the variables of the equation we’ll build.

Num:  Number of drives in the array

ALoss:  Allowed loss – the number of drives we can afford to lose before unrecoverable data loss occurs.

AFR:  Annualized Failure Rate (derived from empirical evidence)

URE:  Unrecoverable Read Error, this is the same as “Bit Error Rate” above

MTTR:  Mean time to repair – this will vary depending on your drive sizes, cage controller(s), memory, processor, etc.  I’m going to just plug in “24 hours” here, you can put in whatever you feel is appropriate.

We’re also going to be playing a probability game with these, since we don’t know exactly when something is going to blow out on us, we can only assume statistical probability.  To set the stage, let’s play with a few dice (and that’s something I know quite a bit about, having written a book on Craps some decades ago).  We want to establish the probability of a particular event.

The probability of an event = the number of sought outcomes / the number of total outcomes

Starting simple, we’ll use a six-sided die.  We want to prepare an equation to determine the odds of rolling a one on any of ten rolls. 

So our sought outcome is 1.  Number of total outcomes is 6.  That gives us 1/6, or 0.1667.

We’re trying ten times, which complicates matters.  It’s not simply additive – it’s multiplicative.  When we’re combining multiple independent events, we multiply the odds of each event together.  The probability of two independent events A and B both happening, then, is Prob(A) * Prob(B).  If we were asking “what are the odds of rolling a one on each of ten rolls,” it would be pretty easy.  But that’s not the question we’re asking.

The question we’re asking is what are the odds of one or more of the rolls being a one?

We have to invert our approach a bit.  We’re going to start with 100% and subtract the chance of never getting a 1.  If we determine the odds of avoiding a 1 on every single roll, then the chance of getting a 1 on at least one of our rolls is the complement of that.  The odds of not getting a 1 on any given roll are 5/6, and there are ten tries being made, so (5/6) raised to the 10th.  Then we simply subtract that from 100% to get our answer.

(5/6) raised to the 10th is (9,765,625 / 60,466,176), which is 0.1615 – I rounded a bit.

1 – 0.1615 = 0.8385, which is our result.  The odds of rolling a 1 on at least one of ten rolls is 83.85%.
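
If you’d rather let Python do the arithmetic, the same calculation – the 5/6 chance per roll and the ten trials come straight from the example above – is only a couple of lines:

```python
# Chance of rolling at least one 1 in ten rolls of a fair six-sided die
p_miss_per_roll = 5 / 6                      # avoid a 1 on a single roll
p_at_least_one = 1 - p_miss_per_roll ** 10   # complement of avoiding it ten times in a row
print(round(p_at_least_one, 4))              # 0.8385 -> 83.85%
```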

RAID Types

A little backgrounder on types of RAID for the uninitiated here – and there’s no shame in not knowing, this stuff is pretty dry for all but the total platterhead.  I guess that means I’m a bit of a dork, but what the hell.

RAID means “Redundant Array of Inexpensive Disks” and first became popular commercially in the late ‘80s and early ‘90s, when hard drives were becoming economically a big deal.  Previously, a strategy called “SLED” was considered the go-to model for storage, and it represented “Single Large Expensive Disk”.  RAID took over, because it was a lot more economical to bond multiple inexpensive units into an array that offered capacity equal to a drive which would cost far more than the combined cost of the RAID.

Different RAID types offer different advantages.  Importantly, all of them are considered for use as volumes, just like you’d consider a hard drive.  These aren’t magic, they’re just volumes.  How you use them is up to you.  When you store production data on them, they need to be backed up using smart backup practice.

Most mentions you’ll see regarding RAID include various numbers, each of  which means something:

RAID 0 – this form of RAID uses at least two disks, and “stripes” data across all of them.  This offers fast read and fast write performance.  Usually this RAID limits its use of any physical drive to the size of the smallest in the group (so if you have three 4TB drives and one 6TB, it will generally only use 4TB of the 6TB drive).  This RAID also provides the used capacity in full for storage, so three 4TB drives will make a 12TB RAID 0 volume.  This RAID adds vulnerability:  if any one of the drives in the array is lost, you lose data.

RAID 1 – this is “mirroring”.  It uses an even number of disks (usually just two), and makes an exact copy of volume data on each drive.  They don’t have to be the same size, but the volume will only be as big as the smallest drive.  Benefit is fast reading (no benefit in write speed) and redundant protection – if you lose a drive, you still have its mirror.  It also is fast to create, as adding a second drive only requires that the new drive receive a copy of the other.  The performance benefits are limited only to the speed of the slowest member of the array.  This method gives up 50% of the total drive capacity to form the mirror.

RAID 2 – it’s unlikely you’ll ever see this in your life.  Uses a disk for parity information in case of loss of a data disk.  It’s capable of super-fast performance, but it depended on coordinating the spin of all disks to be in sync with each other.

RAID 3 – Also extremely rare, this one is good for superfast sequential reads or writes, so perhaps would be good for surveillance camera recording or reading extended video tracks.  This also uses a parity disk similar to RAID 2.

RAID 4 – another rare one, suitable for lots of little reads, not so hot for little writes, also uses a dedicated parity disk like 2 & 3.

RAID 5 – this is currently the most common form of RAID.  It stripes data among all its drives, just like RAID 0, but it also dedicates a portion of the array equal to the capacity of one of its disks to parity information, and stripes that parity information among all disks in the array.  This is different from the previous forms of parity, which used a single disk to store all parity info.  RAID 5 can withstand the loss of any one disk without data loss from the array’s volumes, but a second drive loss will take data with it.  This array type has an advantage in write speed over a single disk, though not quite as much as RAID 0, since it has to calculate and record parity info.

RAID 6 – this basically takes the idea of striped parity in RAID 5 and adds redundancy to it:  this array stores parity info twice, enabling it to resist the loss of two drives without data loss.

RAID 10 – this is actually “nested” RAID, a combination of 1 (mirroring) and 0 (striping).  This requires at least four disks, which are mirrored and striped.  Usually this is done for performance, plus some data protection.  It’s a little more protected than RAID 5, in that it can withstand the loss of one drive reliably, and if it loses a second, there’s a chance that second loss won’t cause data loss.  However, this one gives up 50% of the total drive capacity to the mirror copies.

There are also a series of other nested forms of RAID, but if you need those you’re well past the scope of this article.

Parity

[Figure: RAID 6 layout showing data stripes (A1, A2, A3, …) and dual parity stripes (Ap, Aq, …) distributed across the disks.  Credit: Wikipedia]

In RAID terminology, “parity” is a value calculated from the combination of bits on the data disks in the array (most famously an XOR calculation, though different vendors can stray from this), and the resulting bit is recorded in the parity stripe.

In the image here of a RAID 6 array, the first bit of stripe A’s parity would be generated by taking the first bit of each A1, A2, and A3, and performing a sequential XOR calculation on them.  This would produce a bit that is recorded on both Ap and Aq.  Later, if a disk fails – say Disk 0 bites it – then the system can read the data from the bits in A2, A3, and Ap or Aq to figure out what belongs where A1 used to be.  When a new drive replaces the failed Disk 0, that calculation is run for every bit on the disk, and the new drive is “rebuilt” to where the old one was.

There’s also an important point to be made about the types of parity you’re looking at in that image.  There are multiple ways to calculate the parity bit that is being used.  In RAID 5, the most common is an XOR calculation.  In this method “bit number 1” on each data stripe is XOR’ed with the next one, and then the next, etc. until you reach the parity stripe and the result is then recorded there.  Effectively this is a “horizontal” line drawn through each disk, ending in the parity stripe.  So when you need to know what was on a lost data disk (whether rebuilding or just reading), it can be reconstructed by re-running that XOR across the surviving stripes and the parity.
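
Here’s a hedged toy sketch of single-parity XOR in Python, with short byte strings standing in for data stripes; it shows the reconstruction trick just described.  The function and variable names are mine, invented for illustration.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks (toy stand-ins for data stripes)."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

# Three 'data disks' and their parity stripe
d0, d1, d2 = b"\x0f\x33\xaa\x01", b"\xf0\x0f\x55\x10", b"\x11\x22\x33\x44"
parity = xor_blocks([d0, d1, d2])

# 'Disk 0 bites it' -- rebuild it from the survivors plus the parity
rebuilt_d0 = xor_blocks([d1, d2, parity])
assert rebuilt_d0 == d0
print(rebuilt_d0.hex())   # 0f33aa01
```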

And then…the gods rose from R’lyeh to lay down RAID 6 parity.

Most RAID-6 uses an encoding method for its extra parity called “Reed-Solomon” (this method is used in a lot of data-reading applications, like barcode scanners, DVD readers, and low-bandwidth radio data transmission).  This method manages to protect against a second missing piece using other data in the array – RS encoding builds its parity using an algorithm that generates something like a scattergram of source bits, both vertical and horizontal, which makes it resistant to the loss of a second data disk (if the second parity stripe were just a copy of the first XOR result, losing two data disks would still be unrecoverable).  I’m not going to pretend I understand the Galois Field and other heavy-duty math behind this stuff; I just know it exists, it is commonly used for RAID-6, and Dumbledore or the Old Ones were probably involved somewhere along the way.  It costs more CPU- and IO-wise, which is why it isn’t commonly used in RAID-5.

(I say “Most” RAID-6, because other vendors can use different methods – for example, Adaptec has their own proprietary algorithm in their hardware controllers, different from RS, but the functional result to us as users is the same.)

Data Loss

What is it that takes us into data loss territory?  Obviously, dropping the entire cage while powered up and running will get us there fast.  Let’s assume that if something along those lines were to occur, you’d have an entirely different set of problems, and you wouldn’t have time to be perusing this article.  Instead, we’ll focus on natural wear-and-tear.  To get to data loss, there’s a sequence of events:

  1. An initial drive failure, and…
  2. Enough additional drive failures, before the data is preserved, to exceed what the array can tolerate

Some other topics we’ll talk about:

  3. Drive failure during a rebuild (I’ll tell you towards the end here why you should have caution before starting that rebuild)

…and/or…

  4. A read error during a rebuild (this is part of why item 3 requires that caution)

This brings me to a very important point, and one around which this entire discussion revolves:  protecting your data.  I think the entire “RAID-5 is poopy” argument stems from forgetting that one must never rely on RAID levels as the only protection of your data.  RAID serves to make you a nice big volume of capacity, and protects your uptime, with some performance benefits.

It does not magically provide itself with backup.  You have to back it up just like anything else.

So if you’re creating a 3TB array, get something that can back that array up – something with enough capacity, on a reliable form of storage, to keep your data safe.

Drive Failure

First Failure

The odds of an initial drive failure are a compound of the AFR and the number of drives, and we’ll figure them on an annual basis.  This part is pretty simple; let’s go back to our dice equation and substitute drive values:

Drive Loss Rate = what are the odds at least one drive will die in a year?

If it’s just one drive, that’s easy – use the AFR.

But it’s multiple drives, so we have to approach it backwards like we did with dice rolls.

So it’s 100% minus (1 - AFR)^NumberOfDrives.

For my Array 1, for example:  those WD Reds have an AFR of 0.0217.  Plugging this into the equation above yields:

100% – (1 - 0.0217)^4 = 100% – 91.59% = 8.41%

So I have about an 8.41% chance of losing a drive in a given year.  This will change over time as the drives age, etc.
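If you’d rather let code do the arithmetic, here’s a small Delphi console sketch of that formula, with my Array 1 numbers plugged in as the example:

  program DriveLossOdds;
  {$APPTYPE CONSOLE}
  uses
    SysUtils, Math;

  // Odds of losing at least one of N drives over a full year, given the per-drive AFR.
  function AnnualLossOdds(AFR: Double; DriveCount: Integer): Double;
  begin
    Result := 1 - Power(1 - AFR, DriveCount);
  end;

  begin
    // Array 1: four WD Reds at 2.17% AFR – prints roughly 8.4%
    WriteLn(Format('%.2f%%', [AnnualLossOdds(0.0217, 4) * 100]));
  end.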

Drive Failure Before Preservation

So let’s assume I lost a drive.  I’m now ticking with no redundancy in my array, and what are my chances of losing another to cause data loss during the window of time I have to secure my data?

This one is also pretty simple – it’s the same calc we just did, but we’re doing it only for the gap-time before we preserve the data and for the remaining drives in the array.  Let’s use two examples – 24 hours, and two weeks.

24 hours:  1 – (1 - (AFR * 0.00273))^N

Where AFR is the AFR of the drive, N is the number of drives remaining.  The 0.00273 is the fraction of a year represented by 24 hours.

2 weeks:  1 – (1 - (AFR * 0.0384))^N

0.0384 is the fraction of a year represented by 2 weeks.

If it’s my Array 1, then we’re working with WD Reds, which have a 0.0217 AFR.  I lose a drive, I have three left.  Plugging those values in results in:

24 hours:  1 – (1 - (0.0217 * 0.00273))^3 = 1 – (0.99994)^3 = 0.0001777, or a 0.01777% chance of failure

2 weeks:  1 – (1 - (0.0217 * 0.0384))^3 = 1 – (0.99917)^3 = 0.002498, or a 0.2498% chance of failure

We now know what it will take for my Array 1 to have a data loss failure:  8.41% (chance of initial drive failure) times the chance of failure during the gap when I am protecting my data.  Assuming I’m a lazy bastard, let’s go with 2 weeks, 0.2498%.

That data loss figure comes out to be 0.021%.  A little bit more than two chances in ten thousand.

Based on that, I’m pretty comfy with RAID-5.  Especially since I take a backup of that array every night.
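Same deal in code, if you want to play with your own window sizes – again just a sketch, with Array 1’s numbers as the example:

  program WindowLossDemo;
  {$APPTYPE CONSOLE}
  uses
    SysUtils, Math;

  // Odds that at least one of the surviving drives dies during the fraction of a
  // year it takes you to preserve the data.
  function LossDuringWindow(AFR, YearFraction: Double; SurvivingDrives: Integer): Double;
  begin
    Result := 1 - Power(1 - AFR * YearFraction, SurvivingDrives);
  end;

  var
    InitialLoss, WindowLoss: Double;
  begin
    InitialLoss := 1 - Power(1 - 0.0217, 4);              // the 8.41% from above
    WindowLoss  := LossDuringWindow(0.0217, 0.0384, 3);   // two lazy weeks, 3 survivors
    WriteLn(Format('Window: %.4f%%', [WindowLoss * 100]));                 // ~0.25%
    WriteLn(Format('Total:  %.4f%%', [InitialLoss * WindowLoss * 100]));   // ~0.021%
  end.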

Unrecoverable Read Error

This figure is generally the one that strikes fear into people’s hearts when talking about RAID-5.  I want to establish the odds of a read error occurring during the rebuild, so we can really assess what the fearful figure is:

What is Bit Error Rate?  In simple terms, BER is calculated as (# of errors / total bits read or sent).  Let’s find a way to translate these minuscule numbers into something our brains can grok, like a percentage.

To start, we’re reading some big quantities of data from hard drives, so let’s bring that into the equation too – there are 8 bits in a byte, and 1,000,000,000 bytes in a Gigabyte.  Add three more zeroes for a Terabyte.

Be aware that some arrays can see a failure coming, and have the ability to activate a hot-spare to replace the threatened drive – most SAN units have this capacity, for example, and a lot of current NAS vendors do as well.  If yours can’t, this is where you should be paying attention to your SMART health reports, so you can see it coming and take action beforehand.  Usually that action is to install and activate a hot-spare.  If you have a hot-spare and it gets activated, it receives a bit-for-bit copy of what’s on the failing drive, and then is promoted to take over the position of the failing disk.  This avoids rebuild errors and is much faster than a rebuild, but it doesn’t protect from BER, so if there’s a bit error during the copy then the incorrect bit will be written to the new drive.  This might not be a big issue, as many file formats can withstand an occasional error.  Might even be that the error takes place on unused space.

Rebuilds of an array are another case entirely.  The time required is much greater, since the array is reading every single bit from the remaining stripe data on the good drives, and doing an XOR calc using the parity stripe to determine what the missing bit should be, and writing it to the new drive.  During a rebuild, that bit error poses a bigger problem.  We are unable to read, ergo we can’t do the XOR calc, and that means we have a rebuild failure.

(If we’re in RAID-1, by the way, a rebuild is a block-for-block copy from the good drive to the new drive – a bit error just gets copied across rather than breaking a calculation, so there won’t be a rebuild failure, just bad data.)

If we had a hot spare, we’d be out of the woods before having to rebuild.  But let’s keep looking at that rebuild.

Translating that BER into how likely a rebuild failure is…the math gets a little sticky.

UREs, just like drive fails, are a matter of odds.  Every bit you read is an independent event, with the odds of failure being the bit-read error chance that we collected about the drive.  The probability equation comes out looking like this:

Odds of a URE = 1 – (1 - BER)^(number of bits read)

Let’s apply the probabilities we started with at the beginning of this article to the drives in my Array 1 now.  A reminder, these are WD Red 4TB drives.  Western Digital specs a BER of 1 error per 10^14 bits.

Array 1 blows a drive.  I’ve got three left, and a new 4TB I popped into the array.  I trigger the rebuild.  We’ve already said 24 hours, so we’ll stick with that (technically it’s closer to 10h for a 4TB, but big deal).

Edit 10.10.2018 – I have identified a mistake in my calcs here courtesy of the Spiceworks forum.  Parity data is being read from more drives than I originally laid out.  By the time you read this, the information below will have been corrected.

My array now has to read from all three remaining drives – two data stripes and one parity stripe – to reconstruct each value written to the new drive.  So I’m actually reading three times the volume of the target drive.

4TB is 4,000,000,000,000 bytes.  Three times that is 12,000,000,000,000.  8 bits per byte means 96,000,000,000,000 bits.  Which is a crap-ton of bits.

However, 10^14 (one expected error per that many bits for our WD drives) is 100,000,000,000,000.  That’s an even bigger crap-ton.  Not that much bigger, but bigger.

So let’s ask the question, and plug in the numbers.  The question:

During my rebuild, what are the odds of rolling a mis-read on any of my 96,000,000,000,000 reads?

As before, let’s invert this question and ask instead, what are the odds of not rolling a mis-read on every one of our reads? and then subtract that from 1.

The odds of a successful read on each of these reads are 99,999,999,999,999 / 100,000,000,000,000.  We’re trying 96,000,000,000,000 times.  Most of our everyday tools can’t raise something to the 96-trillionth power directly, I’m afraid – even Excel’s BINOM.DIST will barf on numbers this size.  You’re going to need a scientific calculator (or a little code – see below) to get this done.

1 – (99,999,999,999,999 / 100,000,000,000,000)^96,000,000,000,000 =

1 – (0.99999999999999)^96,000,000,000,000 =

(now you’re going to have to trust me on the following figure, I got it from the scientific calculator at https://www.mathsisfun.com/scientific-calculator.html)

1 – 0.38318679500580827 = 0.6168132049941917

So the odds of a URE giving my Array 1 a bad case of indigestion during the rebuild are 61.68%.  That’s a pretty scary figure, actually, and I’ll get to the mitigation of it later.  It’s this kind of figure that I think generally gives people enough of the willies to make that crazy “RAID-5 is for poopyheads!” proclamation.  Very likely because the people who make that claim assume that this is the end of the road.

Thankfully, we’re looking at odds of data loss.  Not necessarily rebuild failure, though that does factor into the odds of loss.
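If you don’t feel like hunting down a scientific calculator, the log trick below does the same job in a few lines of Delphi (double precision is plenty here).  This is just my own sanity check of the figure above:

  program UreOdds;
  {$APPTYPE CONSOLE}
  uses
    SysUtils, Math;

  // Odds of hitting at least one unrecoverable read error while reading BitsRead
  // bits, given a rating of one error per BitsPerError bits.
  function UreOddsForRead(BitsRead, BitsPerError: Double): Double;
  begin
    // LnXP1(x) is ln(1+x), accurate for tiny x – this sidesteps the
    // "raise it to the 96-trillionth power" headache by working in logs.
    Result := 1 - Exp(BitsRead * LnXP1(-1 / BitsPerError));
  end;

  begin
    // My Array 1 rebuild: ~96 trillion bit reads against a 1-in-10^14 rating.
    // Prints about 61.7% – the same ballpark as the calculator figure above.
    WriteLn(Format('%.2f%%', [UreOddsForRead(96e12, 1e14) * 100]));
  end.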

The Equation for Data Loss

In order to have loss of data, we basically have to lose more drives than our array can tolerate before we can protect or preserve that data.

Let’s say that window of time comes out to two weeks.  That’s probably a lot more than we need, so it will inflate the odds to a conservative number.  Two weeks is 336 hours, or 0.038 of a year.

So given that, the basic odds of data loss are:

For RAID-5, we need to lose a second drive for data loss.  That means odds of Initial loss * odds of another loss during window (remember that these are multiplicative, not additive).  If all the arrays I mentioned above were RAID-5, and using the “lazy bastard” two-week window, here’s where we’d be:

Here’s each array, with its drive type, annualized failure rate (AFR), odds of an initial loss [1 – (1 - AFR)^N], odds of another loss during the window [1 – (1 - (AFR * 0.0384))^(N-1)], and the total chance of data loss (initial loss * loss during window):

  • Array 1 – 4 drives, WD Red 4TB, AFR 2.17%:  initial loss 1-(1-.0217)^4 = 8.41%;  loss during window 1-(1-(.0217*.0384))^3 = 0.25%;  total 0.00021, or 0.021%
  • Array 2 – 4 drives, HGST NAS 8TB, AFR 1.2%:  initial loss 1-(1-.012)^4 = 4.7%;  loss during window 1-(1-(.012*.0384))^3 = 0.138%;  total 0.00006486, or 0.0065%
  • Array 3 – 8 drives, WD Red 6TB, AFR 4.19%:  initial loss 1-(1-.0419)^8 = 28.99%;  loss during window 1-(1-(.0419*.0384))^7 = 1.12%;  total 0.003249, or 0.3249%
  • Array 4 – 12 drives, Iron Wolf Pro 10TB, AFR 0.47%:  initial loss 1-(1-.0047)^12 = 5.94%;  loss during window 1-(1-(.0047*.0384))^11 = 0.216%;  total 0.0001283, or 0.01283%
  • Array 5 – 12 drives, Iron EC 8TB, AFR 1.08%:  initial loss 1-(1-.0108)^12 = 12.217%;  loss during window 1-(1-(.0108*.0384))^11 = 0.455%;  total 0.0005559, or 0.05559%
  • Array 6 – 12 drives, Seagate Savvio .3TB, AFR 0.44%:  initial loss 1-(1-.0044)^12 = 5.154%;  loss during window 1-(1-(.0044*.0384))^11 = 0.1857%;  total 0.0000957, or 0.00957%
  • Array 7 – 7 drives, Seagate Savvio .6TB, AFR 0.44%:  initial loss 1-(1-.0044)^7 = 3.04%;  loss during window 1-(1-(.0044*.0384))^6 = 0.1013%;  total 0.0000308, or 0.00308%

I think the values above show definitively that RAID-5 is a perfectly viable storage mechanism.

RAID-6 Enters the Fray

With RAID-6, we’re now adding a second parity stripe distributed among the disks of the array.  In order for this type of array to fail, we have to have a third disk die during the window.  I won’t repeat the entire set of equations, because that would be a pain in the ass.  Basically, we’re adding a new column, called “Second Loss During Window”, which has the exact same formula as the “Loss During Window” one – the only difference is that the exponent is one less.  Once we have that column, we multiply it with the Initial Loss and the Loss During Window to get the real figure of data loss.

Same first three arrays, now with the extra column:  second loss during the window [1 – (1 - (AFR * 0.0384))^(N-2)], with the total chance being initial loss * loss during window * second loss:

  • Array 1 – 4 drives, WD Red 4TB, AFR 2.17%:  initial loss 8.41%;  loss during window 0.25%;  second loss 1-(1-(.0217*.0384))^2 = 0.16%;  total 0.0000003364, or 0.00003364%
  • Array 2 – 4 drives, HGST NAS 8TB, AFR 1.2%:  initial loss 4.7%;  loss during window 0.138%;  second loss 1-(1-(.012*.0384))^2 = 0.092%;  total 0.0000000597, or 0.00000597%
  • Array 3 – 8 drives, WD Red 6TB, AFR 4.19%:  initial loss 28.99%;  loss during window 1.12%;  second loss 1-(1-(.0419*.0384))^6 = 0.9615%;  total 0.00003122, or 0.003122%

As you can see, even if you’re a lazy bastard about your window of vulnerability, RAID-6 makes the odds of data loss vanishingly small.

Failure Mitigation

So you had a drive blow out in your RAID-5 or -6 array, and you’re staring at that “Loss during Window” column now, wondering what to do.

The most important action you can take right now is this:

CALM DOWN.

You haven’t lost data yet.  But by hasty action, you might.  Stop, breathe.  Do NOT touch that array, and do NOT power it down just yet.  If one of your disks has checked out of the hotel, when you reboot the cage, there’s a chance it could “unrecognize” that disk and re-initialize the array, blowing your data into never-never land.

Steps to take here:

  1. DO NOT STUFF A NEW DRIVE IN THE ARRAY AND REBUILD. NOT YET.
  2. If you haven’t done so already, write down your RAID configuration. Include total capacity, disk types, stripe size, drive order, partitions/volumes and any other details you can get.
  3. Can you isolate the array from users? If you can, do it.  Get their IO off the array if possible.
  4. Check your backups and confirm that you have a backup of the array’s data.
  5. Get another volume online that has capacity at least equal to the total used space on the degraded array. One of the easiest methods of doing this is a USB 3.0 drive cradle and a set of SATA drives.
    1. Copy all your data from the array onto this volume and confirm that it is valid
  6. If you can affirm that 5.a is done and good, proceed
  7. Are all the drives in the cage the same age? If so, get replacements for all of them and start a completely new array with the new ones.  Retire the old drives.
    1. Reason for this is that they have all experienced similar wear-and-tear, and they all probably come from the same batch made at the factory – if there is a defect in one, there’s a good chance that this defect applies to all of them. You’re better off just dropping them all and replacing them.
    2. If they aren’t the same age, just note the ones that are, and plan to replace them asap.
  8. Okay, if 4 is good and 5 is good, NOW you can do a rebuild if you feel you have to. I still recommend reinitializing completely fresh and restoring the copied/backed up data, but I also recognize that convenience is a big draw.

Part of the whole debate about the validity of RAID-5 tends to stem from the probability of failure during a rebuild – which can be unacceptably high with old disks of appreciable size (see my section on UREs above).  The argument seems to make the assumption that the array is either not backed up, or is somehow on critical path for general use by users.

Rebuilding an array while live and in production use should be considered a last resort.  You can see above that there is a high likelihood of failure even with reasonably modest-sized arrays.  The fact that current RAID vendors offer live-system rebuilds should be considered a convenience only at this point.  When we were using 100GB disks, a live rebuild was a viable option, but that simply doesn’t fit any more.

If your array is in that position – critical path and not backed up – then you have a big problem.  You need to get a backup arranged yesterday.  And if it is critical path, then you should ensure that there is a failover plan in place.  Never assume that just because you have your critical data on RAID that you are totally safe.  You are safer in the case of a drive fail, yes, but you aren’t out of the woods.

Stuff to consider that will help you survive an array failure:

  • Buy a USB cradle or a tape drive that can handle the capacity of your RAID array. Use them religiously to preserve your data.
    • Test them regularly (monthly is good) to ensure that when a fail does happen, you’re prepared to recover.
  • Consider a second array, or a big-ass disk that you can house next to the array, of similar capacity, that you can set up on a synchronization system (for example, Synology has “Cloud Station Server” and “Cloud Sync” apps that can be used to ensure one NAS maintains exactly the same content as the other). That becomes your fail-over.
  • Unless you absolutely have to, do not rely on the use of a live rebuild to preserve your data.
  • If you have room in your cage, add another drive and convert your RAID-5 to RAID-6 to buy you extra insurance against multiple drive failure.
  • Smaller volumes are better than big ones – you can shovel smaller volumes onto a USB drive more easily than trying to subdivide one large one onto multiple removable drives.
  • When filling up an array, buy disks of the same brand and capacity, but mix up who you buy them from, or buy them over time, to protect yourself from factory batch errors.

Summary

There’s no “magic panacea” here with RAID systems.  They’re great, they’re effective, and there are simply some things that they do not do.  I hope that I have helped dispel some of the fear about RAID-5 here, and it is also my hope that I have perhaps called attention to any gaps in your data coverage so that you can fill them now rather than wait for the inevitable to occur.  With luck, you can breathe a little easier now, and not be too harsh on RAID-5.

Feel free to write me with any questions, comments, death-threats, or mathematical corrections you might feel necessary.  Meanwhile, happy computing.

Edit 13.08.2018:  I whipped up the figures into a spreadsheet that you can download and use for your own arrays as well.

Edit 10.10.2018:  edited for clarity, and corrected math on UREs.  Also corrected spreadsheet which is linked below.

Array Life Expectancy

 

Posted in Business, Disk Management, Hardware, IT, PC Stuff | Tagged , , | Leave a comment

The USS Enterprise (Refit) from Star Trek: The Motion Picture

Build Log:  Part 16

Well, hi!

First off, I feel an apology is necessary here – I haven’t updated my log in a LONG time, and I should have.  I’m sorry.  Since last we spoke, I’ve had a move across the country to Munich and started a new job with a cool biotech firm, and that’s been occupying a big chunk of my time.  I haven’t had the kind of space or free time to do the Enterprise justice, so I have been doing some Delphi coding and a couple of ocean-going ship models in the limited environment I’ve had.

However, I have a proper man-cave now, and I’ve been able to make some progress, so it’s time for an update!

In trials with that little Chinese MP3 player, I discovered that the power output from a PC USB port is anything but steady, and this resulted in the player re-setting itself at random intervals.  This had a really awful effect on my attempts to code a solution for the Big E, and I eventually scrapped it.  I didn’t find out about the voltage problem until well after I’d soldered together a very nice transistor setup on that board I showed you, and while the player was having fits being plugged into my laptop, I was left thinking I’d screwed something up in my soldering.  As it turns out, I had everything right, but the player just wasn’t up to the job.

So let’s talk about that player for a bit – given its sensitivity to voltage changes, it really isn’t a good choice for a model that has more than one sound.  If you’ve got just one sound or some background noise, it’s probably a solid play.  But for what I have in mind, it just won’t cut the mustard.

Instead, I went looking for, and retrieved, several different MP3 options.  My favorite so far, and one that does work for this job, is this:

All the power of an MP3 player in about 1.5cm square.

It takes a micro-SD card, just like these little players, and is only just barely larger than one.  It’s called a “DFPlayer Mini”, and you can get them dirt-cheap on eBay.  Reference materials are widely available on the web, and they’re very easy to control from an Arduino.

Before I start digging into the Arduino code, I want to go into the soldering and setup of a PCB board with you first (skip ahead if this isn’t of interest to you).

 

 

I’m not a professional electrician, so if you have access to one, take their advice ahead of mine.  I’ve found this set of methods to be very useful, and feel it’s my duty to share.

Get the right size PCB first – PCB means “printed circuit board”, by the way; it’s not some magic acronym or anything.  You want to accommodate not only your parts, but your fingers too. Try to imagine where it’s going to be mounted (in the model? in the base?) and what kind of height you’ll have available.

I’ll probably design my base with an inch or two of height in it, and since this model is so huge there will probably be some very big empty spaces in the base…so that’s where I’ll stick most of my controlling systems.  The TrekModeler board will go in the secondary hull, so I won’t have a kajillion wires going up-and-down the post.

You’re going to need wire – at least two different colors of insulated wire, and preferably one more spool of non-insulated.  (Non-insulated tends to be a little stiffer, too, which makes it useful for creating false ‘legs’ for soft wires during your breadboard testing.)  These shouldn’t cost you a whole lot, and can be had from Radio Shack, Conrad, or bought online.  Leftover speaker wire can work, but that can be a bit thick for this purpose, and hard to work with.

Some needle-nose pliers are also going to come in handy, although you could probably use a wooden chop-stick from takeout Chinese to get this job done too.  Wire snips are a big help (kitchen scissors work, but are kinda large and you might end up blunting them).

Many electrical parts require properly-rated resistors to go with them, and of course longer wires from the board to reach your lights and so on.  Although it is possible to mount them on a PCB and thread all the “feet” together, I find that using little ‘horseshoes’ of uninsulated wire to connect the parts makes things slightly clearer when looking at the board.  For some reason, I also find it easier to get my parts to sit more cleanly when using horseshoes.

To make a horseshoe, just take some uninsulated wire and wrap it around one point of your needlenose pliers (or a chop-stick) and snip off the excess.  Using those needlenose pliers, you can then grip the horseshoe by its middle and insert the legs into the holes on the PCB you want them in.

Grab the wire with pliers, end flush to one side…

Gripping the wire tight, roll the pliers so the wire folds flush against them at about the right thickness to match the holes on your PCB…

This makes a little “hook”.

Snip off to have even legs.

It’s easy to make quite a few in advance, so they’re ready when you need them.

When used, they make a nice clean look.

When soldering, a key rule to remember is “less is more”.  You only want to use just enough solder to fill the hole in the PCB and cement the legs of the parts inside it.

Good solder looks like this.

You don’t want a great big glob on the board, and you absolutely don’t want spill-over connecting other parts of the board (these are called “shorts”, as in “short circuit” – generally an undesirable connection that can potentially damage your components).

Bad solder, on the other hand, can look like that.

 

I went into a little detail in my last installment about how the functions of the program are going to work…but after looking it over I’m going to have to be more detailed than that in order to ensure I don’t bork something up completely.  It’s going to be a rather extensive wiring setup, and I’ll use one other 3rd-party item, a lighting board from TrekModeler (at least, I think I’m going to use that – I haven’t run current through it yet, so at this point I’m assuming that it will do the job I want) to handle the lightup and power-down.

Some of this may get a little tedious, but you really have to be tedious when you’re designing an embedded software program with electronics – once it’s in, you aren’t going to get much chance to change it.  (Where I position the controller will make this easier, but let’s assume for the sake of argument that you aren’t going to have that option.)

Here are the main routines as I envision them:

  • Startup – initialize everything to idle
  • Sleep – put the model to sleep
  • Wake – wake it up from sleep
  • Event – button got pressed, or some other trigger occurred
    • Check to see if event end time is here – if so, switch to idle
  • Reset – shut down anything currently going on and re-initialize
  • PowerUp – perform main-sequence music and lights
  • Power-down – shut down completely
  • BattleStations – Toggle (Red Alert on, or standing down from)
    • Makes Fire commands available
  • FireTorps – torpedoes fire
  • FirePhasers – phasers fire
  • Warp Toggle – go to warp or return from it (sound and lights)

Routines that aren’t going to be invoked by user choice, but will be needed, include:

  • MP3 controls
    • Play (int Track)
    • Pause
    • Next
    • Prev
    • Volume
  • Check PlayTime (This determines how long the model has been active – because of a limit in the Arduino’s clock counter, it rolls over after about 50 days. Since most of my tracks are timed to sequence with lights, I need to reset my timing if it has rolled over.)

These functions will revolve around physical components in the model, which consist of lights, and available sound resources, stored on the SD card in the chip’s socket.  The lights, listed out as circuits, are:

  • Warp engine interiors, engine crystals, deflector dish (blue), and the impulse crystal
  • Impulse engines, deflector dish (amber), and impulse crystal
  • Interior lighting
  • Floods
    • Upper primary hull
    • Lower primary hull
    • Secondary hull sides
    • Front engines
    • Rear engines
    • Pylons
    • Neck
  • Nav lights (the slow-blinking ones on the sides and front of the primary hull)
  • Anti-collision (fast-blinking on primary and secondary hull)
  • Shuttle bay
  • Torpedo room (when ready to fire, the launchers show red)
  • Torpedo launchers
  • Phasers

Sound resources on the SD card are:

  • Ambient/idle
  • Warp
  • Torpedo fire
  • Phaser fire
  • Red Alert
  • Startup Music

We’ll now build a “user story” to indicate what the model does in response to user action, and when it is able to do what.

  1. User turns on power
    1. Initialization – turn on the MP3 player, turn on the extra board, enable any lights that are not subject to controls
  2. Arduino program loop begins:
    1. Check sleep timeout – if the model has been awake and no one has touched it in a while, put it to sleep
    2. Check total playtime (50-day runout of counter, make sure we don’t overrun that clock in the controller’s CPU)
    3. Check to see if the user asked for an action
      1. If so, perform that action
    4. Is there an action running?
      1. Has its time run out?
        1. If so, return model to idle state
          1. Basic Power: interior lights, shuttle bay
          2. Play idle sound

The user can ask for certain functions only at certain times – for example, firing a torpedo can only be done after going to Battle Stations/Red Alert.  Here’s the breakout of how those functions relate to one another, and which ones become available when:

  • Initialization – this is the root event on which all others depend
    • PowerUp
      • Play intro music, switch to ambient after done
      • Lighting sequence:
      • Bridge
      • Pylons
      • Engine forward
      • Neck floods
      • Engine aft
      • Aux control
      • 2ndary hull sides
      • Deflector dish to impulse
      • Nav and anticollision somewhere in there
      • Events available:
        • Power-down (reduce lighting to state like just after initialization)
          • This plays the ambient noise
        • Sleep
          • Turns off sound
          • All lights turned off
          • Available functions:
            • Wake
              • This is same as “initialize”
            • Warp on/off
              • Play warp sound
              • Switch impulse engines off
              • Change deflector dish and impulse crystal to blue
              • Start warp engine interior and crystals
            • BattleStations on/off
              • Play Red Alert sound, ambient sound after that
              • Turn on torpedo bay backlight
              • Available functions:
                • Fire torpedoes
                  • Play torpedo sound
                  • Lights to fire tube 1
                  • Play torpedo sound
                  • Lights to fire tube 2
                • Fire phasers
                  • Play phaser sound
                  • Alternate phasers on/off while sound is going
                • Return to normal (I’m also going to include a five-minute timeout so that after five minutes of no action while at Red Alert, the ship will return to normal state)
                  • Returns to powered-up idle state
                  • Powers down weapons
                  • Play ambient sound

To accomplish all this, we’re going to need wiring, software, and some work.

With any wired system, I recommend you first assemble your wiring harness outside of the model, using a breadboard and jumper wires with sample LEDs.  You’ll save yourself a lot of tears if you do this first, because if you don’t, and you make any kind of error, you’ll have to either fix it or live with it, and both of those are painful.

To help with that, there are quite a few free apps available online.  I’ve used one called “Fritzing” to put together my picture of what will be needed to connect to my Arduino.

I still intend to use a Nano, by the way, but it’s easier to visualize in the Fritzing interface using an Uno.  Here’s what I came up with:

My breadboard layout. I use this to guide myself through the physical circuitry.

Note that I’m only including a single LED attached to the TrekModeler board in this diagram – there are many more, but I didn’t feel it necessary to lay them all out here.  This board also comes with two momentary switches, which are intended to be mounted on the model’s base and can control the power-up and warp-impulse switching.  I’m going to use transistors under the control of my Arduino as the momentaries here; in effect, my software will pretend to be the finger pushing the button.

I also have a little old PC speaker connected to the system for testing.  I’ll change that for a headphone jack when it goes into the real base.

This diagram does not show my phaser setup, which will connect pins A2 and A3 as my phaser controls.

When assembled on a board with the TrekModeler controls all hooked up, the entire thing looks a bit like this:

The whole schmear wired together. I’m still a little afraid to add power until I’ve triple-checked each of these connectors…

Not pretty.  It’ll look better soldered into a PCB, I promise.

I put some push-buttons on the setup which aren’t in the diagram, to enable manual testing.  These are on pins 2, 3, and 4.

If you’re going to code this yourself, have at it!  Just remember to comment liberally and informatively, always focusing on why a thing is there (readers will be able to figure out what is there just by looking at the code).

I’m probably going to end up packaging and selling this thing as a lighting harness for the ship, so I hope you’ll forgive me if I hang onto the code I’ve written.

Next time I do a write-up I will have run through a complete test of this wiring harness and moved it onto a PCB.  I might do a video of that part as well, not sure yet.  In any case, thanks for stopping in again and I’m looking forward to passing on more news as I get back to work on the Big Lady.

USS Enterprise – Build Log Part 15

Posted in Build Log, Model Kits, Sci-Fi, Uncategorized | Tagged , , , | 1 Comment

Why do I choose Delphi?

Why?  Really, why do I?  In a world full of PHP, Java, Visual Studio, Python, Lua, and all these other syntaxes, what is it about Delphi that brings me back time and time again?

First off, let’s be clear – it’s not an exclusive choice.  I still use Python if I have to code something for a Raspberry Pi, or whatever flavor of C it is that Arduino sketches are made of.  I do a little C# from time to time when I have to work on a project that is written in it.

But why do I choose Delphi when given the opportunity?

A load of people have asked me this question over the years, and although I might waffle a bit on what I say, the answers are generally the same.  There are several reasons really, but let’s talk about the most important ones.

Legibility

Delphi is easy to read.  It’s a Pascal derivative, and Pascal was originally designed for just that purpose – to be easily read.  From a syntactic perspective, it is very close to English, my native language.

Its handling of framework elements is also quite intuitive.  “Object.Property := value” is very easy to pick up.  Following sub-objects all the way through the chain in a single line is also simple, being nothing more than “MyObject.object.object…” and so on.

The editor within Delphi also makes things insanely easy, doing indents for you, syntax-highlighting in your code, and one of the (if not the) best code-completion systems available in the world.  Being able to collapse procedures and methods within the code also goes a long way towards making it very easy on the eyes.
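To give you a taste of what I mean, here’s a throwaway console snippet – nothing clever, just the shape of the language:

  program Legibility;
  {$APPTYPE CONSOLE}
  uses
    SysUtils, Classes;
  var
    Lines: TStringList;
  begin
    Lines := TStringList.Create;      // create the object
    try
      Lines.Sorted := True;           // Object.Property := value – reads like English
      Lines.Add('Delphi');
      Lines.Add('reads');
      Lines.Add('nicely');
      WriteLn(Lines.CommaText);       // prints: Delphi,nicely,reads
    finally
      Lines.Free;
    end;
  end.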

Speed

Delphi is fast.  Really, really fast.  In the days when it first was released, a fast compiler was a major coup in marketing terms – regular C and C++ compilers could take hours to build relatively simple Windows apps…and Delphi could do them in seconds.  Borland used to brag that while most programmers in C++ were afraid to touch the compile command, Delphi users compiled constantly just to do syntax checking.

Today, a fast compiler is not so unique any longer, but Delphi is still quick as hell.  You never get tired of speed, I guess, and I am loath to give it up.  I know when I do a full build of my app I won’t have to go get a coffee just to pass the time.

Smarts

Delphi’s IDE contains a host of useful debugging tools, all bound into the IDE.  Value inspectors, call stack, all the traditional stuff you’d expect from a dev tool, plus something a little strange and fantastic:  the ability to step line-by-line through your code in a fashion usually reserved only for interpreted code.

Add to this conditional breakpoints, memory-specific commands, and the ability to attach to running processes, and there’s a ton here to help you identify what’s wrong with your code.  To top it all off, you can even trace your debugging into the framework source code that ships with Delphi, just in case there’s an issue in those sources.

Logic

I didn’t realize this when I first started working with Delphi, but somehow the logic of it just works out almost perfectly.  How things are done in Delphi follows an extremely well-thought-out design with a very solid, stable pattern.

Object orientation, as much of a cliché as that might be, is just done right in Delphi.  Java always pissed me off because Sun couldn’t figure out how to get their terminology straight – in their lingo, “class” refers to both the blueprint for creating the object and the instances of the object themselves.  In Delphi, a class is just that – a class.  It’s not an instance.

Objects created in the Delphi framework also follow good encapsulation rules – a combo box is just a combo box.  It’s not an edit box, it’s not a grid, it’s just a combo box.  Same goes for a query object, or a field, etc.  Elements in your app do what they should, and only what they should.

There’s also things like how it handles properties of objects.  Under normal circumstances, an object that is created in other languages/platforms needs to be initialized (have all its properties set), or not accessed until it is.  Delphi self-initializes objects as a part of their makeup – every object created in Delphi has a “constructor” method in which the author is expected to instantiate necessary sub-objects and initialize variables.  Although authors outside of Delphi’s sphere might create objects with bad constructors, this rule has been followed almost religiously within Delphi’s authorship.

What this means to the regular developer is that when they create an instance of an object, it keeps code to a minimum – letting the developer focus more on their app, than on the setup of their controls.

And that’s how it should be.
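Here’s a made-up little class to show the pattern – the constructor takes care of its own sub-object, so the code that uses it stays short:

  program Constructors;
  {$APPTYPE CONSOLE}
  uses
    SysUtils, Classes;

  type
    TLogBook = class
    private
      FEntries: TStringList;
    public
      constructor Create;
      destructor Destroy; override;
      procedure Add(const Msg: string);
      property Entries: TStringList read FEntries;
    end;

  constructor TLogBook.Create;
  begin
    inherited Create;
    FEntries := TStringList.Create;   // sub-object is ready the moment the owner exists
  end;

  destructor TLogBook.Destroy;
  begin
    FEntries.Free;
    inherited;
  end;

  procedure TLogBook.Add(const Msg: string);
  begin
    FEntries.Add(FormatDateTime('hh:nn:ss', Now) + '  ' + Msg);
  end;

  var
    Log: TLogBook;
  begin
    Log := TLogBook.Create;           // no property-by-property setup required
    try
      Log.Add('Captain''s log, supplemental.');
      Write(Log.Entries.Text);
    finally
      Log.Free;
    end;
  end.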

Corba

Delphi added CORBA support in the late ‘90s and…oh, who the fuck am I kidding?  I despise CORBA.  It’s one of the worst ideas added to Windows programming since Windows ME.  I just threw it in here to see if you were still reading :).

Components

Components are the little pre-packaged bits of code that show up in your tool box (called the “Tool Palette”) in Delphi.  There are several hundred that ship with Delphi, and those are probably enough to get the job done on whatever app you are building.  If not, there are thousands upon thousands more available on the net.  Chances are, if you need something done, someone’s done it in Delphi.

Deployment

Deploying apps in some language platforms can be a real chore.  Installing a .NET app, for example, requires the entire .NET framework to be installed on the host machine.  Generally not a problem, as most Windows machines already have it, but because we’re in the business of writing software you have to check to make absolutely certain that everything you need for run time is present, and if it isn’t, make sure it gets installed before you run your app.

In Delphi, odds are that unless you included something exotic like a third-party reporting engine, or maybe you’re writing a DataSnap-enabled n-tier program, there’s going to be just one file:  your program executable.  Even if you do have some extra fancy stuff, those will include a very minimal file count (DataSnap, for instance, only requires one extra library to accompany your app).  Delphi really makes deployment a breeze.

Data

Delphi, since its inception, has been great at handling databases of many stripes.  In its current incarnation, it supports something like 100+ data sources straight out of the box (Architect version).  From Excel to Teradata to MS SQL, you can get there from here with Delphi.

Multiple Platform Targets

When I was in the thick of things back in ’99, “cross platform” was a big selling point that sadly didn’t make us a lot of money.  We released Kylix (Delphi for Linux) and the reception was a bit lackluster (I could go into the why and how, but I don’t want to talk badly about execs who aren’t here to defend themselves).

Today, though, mobiles are making huge leaps forward in their capacity as computing devices, and “cross platform” doesn’t just mean Linux – it includes Android, iPhone, and more than a few other systems that have proven themselves.

And Delphi can program for them.

That’s a BIG plus in my book.  You can’t really get that from other tools, and certainly not ones with such a solid and useful development environment or lengthy background.

Designers and Two-Way Development

“Two-Way Development” is a feature that goes way back in Delphi – it used to be called something a little different, but I can’t recall specifically what it was – it’s the ability to program one’s UI both visually through a designer, and in text using the editor.

Delphi’s designers (the palettes on which you “draw” your forms and other visible elements) have always been cleaner and more close-to-real than its competition, which cuts down on the “Garrr, that thing is still three pixels off!” moments.  I really appreciate anything that saves me tedium like that.

As well, the two-way nature of how the designer works (if you want to see what I’m talking about, create a form in Delphi and throw a few controls on it – then hit Alt+F12) is super-useful when you want a quick breakdown of how that form is really laid out.  It’s great for very crowded spaces, or ones with many controls on them overlaid on top of one another.  Using text mode, you can find the control you want fast, and see in an instant what its properties are.

Summary

Look, I could go on and on with a multitude more reasons why I consider Delphi a superior tool to any other on the market today (and really, it is superior – there simply isn’t anything that stacks up against it).  It’s simply better at doing what it’s made to do.

It’s also a bit of a “secret weapon” for me.  When I get pulled to do a job, I bring Delphi out.  Not just because it’s good – but because it helps me be good.  I know I can do far better with Delphi than someone equal in skill to me using something like Visual Studio.  It’s simply a better tool.

 

 

Posted in Business, Development, IT, PC Stuff, Programming, Software, Work | 2 Comments

Delphi Does Data

Delphi

Hi everyone!

Last time we talked Delphi, we talked a bit about frameworks and we built your first Delphi app, a “Hello World” for Windows.  This time around, we’re going to talk about one of the most common uses for programming tools in business environments – data access.

Let’s get down to brass tacks on this first – what is data access?  Put simply, “data access” means the ability of a program to read and write persistent information – info that will be kept (hopefully safely) while the program is turned off, and can be retrieved when turned on again, or when asked for by another program.  Generally, we do this with databases – things like Oracle, MS SQL, MS Access, etc.

It is also worth making a differentiation between “data” and “information”.  In this context, I’m going to use “data” to represent raw data, the kind of thing that might be useful to a program, but generally doesn’t mean anything to a human who doesn’t know the insides of the computer.  “Information,” on the other hand, I’m going to use to represent the stuff that we can put up in front of a user and have a reasonable chance of being understood.

When Delphi was first launched in 1995, it shipped with a series of VCL components that wrapped up the “Borland Database Engine” (BDE for short), which was already a healthy set of connectors to various databases like Paradox, DBase, DB2, Microsoft SQL Server, Oracle, and a few others (if you count ODBC – “Open DataBase Connectivity”, a Microsoft library that enabled vendors to write a connector to ODBC and have coders connect through that – you could have dozens or hundreds of possibilities).  Each of these database types was connected by a library commonly referred to as a “driver”.  So if you had a “database driver” for a specific database, you could access that type of server or group of files.

The basic premise of most data access methods is that a program binds to a general data-access layer library.  That library may have one or more drivers (specialist libraries that know how to connect to a specific database type), and each driver knows how to operate with its own specific data repository (be it a SQL server of some sort, or a file-based database like FoxPro, Paradox, etc.).

Conceptually, it could look something like this:

 

 

 

The job of the programming tool is to make this diagram basically invisible to the programmer (unless he/she specifically wants to write code that does this kind of work).  We aren’t doing that, so we want something that encapsulates this sort of function and makes it pretty painless to do.

Fortunately, Delphi was designed almost from the outset to do just that – and not only to make it painless, but fun.  Delphi was the first programming tool to offer its programmers the ability to see live data in its designers as well, which was extremely valuable in making sure one was coding correctly.  No one else had ever done it like this, and it was several years before anyone else could offer something similar (a short-lived and extremely bad programming tool called “PowerBuilder” did, and eventually Microsoft figured out a way).

As I mentioned, when Delphi first shipped it had components that wrapped the Borland Database Engine.  Since then, quite a few additional ‘engines’ have been added (I’ll use “engine” for a library that offers up multiple drivers), expanding Delphi’s potential database access even further.  The BDE is still around, but it is rather old and doesn’t shine quite the way it used to – it has been deprecated, which in general means it is no longer supported and no new code is being written to improve or expand it.

In addition to BDE, the following engines are included:

Interbase Express – connectors specific to InterBase, a SQL server offered by Borland/Embarcadero/Idera (I can’t keep straight which company kept what part).  InterBase is a fast and compact database that is particularly good for bundling in with applications, though expensive in its deployment costs.

dbExpress – this is an engine that surfaces drivers for Sybase’s SQL Anywhere, regular Sybase, DB2, Firebird (an open-source fork of Interbase), “IBLite” (an even-more-compact version of Interbase), Informix, Interbase, Microsoft SQL, ODBC, Oracle, and SQLite.  It also offers a connection to DataSnap, which is a programming framework for making multi-tiered applications.

DBGo – components that harness ADO (a successor of sorts to ODBC).

FireDAC – FireDAC is a modern iteration of the multi-access engines produced in prior generations of programming tools.  It presents a common interface to dozens of different data repositories and storage methods.  Among them are:

  • MS Access
  • MS Excel
  • DBase
  • Paradox
  • FoxPro
  • ODBC
  • dbExpress drivers
  • Ingres
  • Nexus
  • DataSnap servers
  • Firebird (embedded and normal)
  • MySQL (embedded and normal)
  • SQLite
  • MS SQL
  • MS SQL Azure
  • MS SQL CE
  • InterBase ToGo
  • InterBase
  • Advantage Database
  • PostgreSQL
  • Sybase SQL Anywhere
  • Informix
  • Teradata
  • DB2
  • Microfocus Cobol
  • Oracle

So…as you can see, FireDAC isn’t joking around.  It connects to a LOT of data sources.  If yours isn’t listed there, it can probably still be reached through an ODBC driver.  (Of course, if it isn’t in there, it probably isn’t worth programming for :) )

Within Delphi, data access is done through a series of components (not necessarily VCL ones, but they all work roughly the same).  First, a connector component representing the program’s access to the database server or location is used, and after that one or many components representing the various bundles of data within that location are set up to enable the program to read and write to them.  Finally, components responsible for passing that data into visual formats are used, converting the data into information.

In the 10.2 (“Tokyo”) build of RAD Studio, the BDE has been removed – it was deprecated long ago, and finally has been pulled.  If you’re maintaining an old version of code that does still contain these, you can retrieve an installer from Embarcadero’s site (here: https://cc.embarcadero.com/Item/30752), but that’s the only case where I’d recommend you do so.  For future use, it’s best to get yourself into one of the more current sets of components.

For starters, let’s take one of the simpler ones, dbExpress, and connect it to a Microsoft SQL Server installation.  As it happens, I have a dev edition of MS SQL here on my laptop, so we’ll start with that one.  You’re going to need to install at least the MS SQL client software on your system before we get started (the client is also included in the server installation if you’re going to put a full server on your machine).  If you’re getting into software development, I’d really recommend you buy a license of the MS SQL Developer Edition (available here:  https://www.microsoft.com/en-us/sql-server/application-development).  It’s a fully-functional server, and is fantastic for working out issues prior to testing against a real server.

Let’s start a new project.  You had the basics of this in my last Delphi article, so go ahead and roll one out.  A blank form is just fine.  I’ll do one here too, a VCL forms app for simplicity’s sake.  I’ll target Win64 again as I did previously.

When Delphi was first launched in 1995, people used it a LOT for database access.  However, even though the data access components are really small, they tend to collect quickly and can really clutter up your designer.  As a solution in Delphi 1, most programmers just added a new form to the project and put all their data components on it to avoid getting their UI out of control.  Borland (the original maker of Delphi) recognized this as a pain point right away and in Delphi 2 released what is called a “data module” – a non-visible form (so it wouldn’t use as many system resources) which can host all manner of non-visual components like data access stuff, API components, and so on.  That’s what we’ll do here too.

Once your project is ready, and you can see your designer with Form1 loaded, go to the File menu.  In there choose File > New > Other… and in the dialog that appears, select “Delphi Files” from the tree view on the left.  The right pane will have a list of choices, one of which is “Data Module”.  Select that and confirm by clicking “OK”.

Thar she blows!

Notice your Project Manager now shows “unit1.pas” and “unit2.pas” as part of your project.  Unit1 is your main form, and Unit2 is the datamodule.  You should probably save and name your files now, to stay in the habit :).  Go ahead, I’ll wait.

 

 

 

 

Saved it?  Okay, great.  Notice the Data Module looks like a blank form, but it has no title bar, no icon, etc.  That’s because it will never appear visually within your application.  Your visual form will use this Data Module, referencing it so that it can get a grip on the components present within it.  To do this, return to your main form, and from the menus choose File > Use Unit.  You’ll see a list with your datamodule in it.  Double-click and you’re on your way.
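(For the curious, “Use Unit” isn’t doing anything mystical – it just adds the datamodule’s unit to your form’s uses clause, something along these lines.  I’m assuming here that you saved the datamodule unit as uSampleData.pas; substitute whatever name you actually chose.)

  implementation

  uses
    uSampleData;   // the datamodule unit – this line is what File > Use Unit adds

  {$R *.dfm}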

First thing we’re going to want is to go to the Tool Palette and open up the “dbExpress” group.  The starter is the TSQLConnection, which will represent a persistent connection to our database.  Grab one and drop it on your Data Module.  The new connection will default to a name of “SQLConnection1” – go ahead and rename it to “MSSQLConnection”.

Our next step is to designate a driver for this component – choose “MSSQL”.

By doing this, the component will fill up its “Params” section with a series of values that it will need to operate.  Most of these you won’t have to touch or bother with.  The two you will need to set are “HostName” and “Database” – the host name will be the name of the server to which you are attaching, and the database is the actual name of the database on that server.  For hostname, I could give it the full name of my SQL instance (I’m assuming you went and picked up the Dev edition of MS SQL I mentioned above), but since I’ve installed it on my development machine, I can use “.” as the machine name.  Each instance of SQL Server gets its own name too, so that is a two-parter.  It will look like this:

[MACHINENAME]\[INSTANCENAME]

So it would look like “MYSYSTEM\SQLONE” or similar.  Since I’m running locally, I’m going to sub “.” for my machine name, so my HostName parameter reads as follows:

.\THEOSQL

The Database parameter is quite literally the name of the database you intend to connect to (Adventureworks is the sample data that MS has always shipped with their product, so you can test with that, but I’m using a home-grown named “SampleData”).

I’m also going to change my “MaxBlobSize” for my own purposes – don’t worry about this.  Leave yours as -1.  If you know what this is for, you can deal with it on your own terms, otherwise it’s not important for this lesson.

Params, check.

Once your params are set, you can test them by changing the “Connected” property to “true”.  You’ll be prompted for a name and password (you did remember to store your login credentials somewhere, didn’t you?), and if you give proper credentials, it’ll change to true.  That confirms that you have a live connection to your database.

Once you’ve confirmed this, go ahead and set it back to “false”.  Leaving a connection on in the designer is setting yourself up for a few problems later, and it’s better to handle it in the program at run-time.  We’ll get back to this shortly when I show you how to get live data showing up in your app.
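(Just so you know where we’re headed: at run-time, flipping the connection on looks something like the sketch below.  This is a hedged example, not gospel – I’m assuming the IDE named the datamodule instance DataModule2, and that you would never really hard-code credentials like this in a real app.)

  procedure TForm1.FormCreate(Sender: TObject);
  begin
    // Supply the login up front so the driver doesn't pop a prompt at the user.
    DataModule2.MSSQLConnection.Params.Values['User_Name'] := 'myuser';      // placeholder
    DataModule2.MSSQLConnection.Params.Values['Password']  := 'mypassword';  // placeholder
    DataModule2.MSSQLConnection.LoginPrompt := False;
    DataModule2.MSSQLConnection.Connected := True;   // same as flipping it in the Object Inspector
  end;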

So…we have a connection to the database, but we don’t yet have real data.  Let’s set that up next.

I made up some data in SQL Server, so we’ll have something to look at.

For this, we need a dataset component.  Where a “connection” represents a channel to the database server, a “dataset” represents a channel to a specific package of data (which might be the contents of a table, the output of a SQL query, view, or stored procedure, etc. – basically anything that can be considered to have actual data in it).  In the case of dbExpress, this means a TSQLDataSet, TSQLQuery, TSQLStoredProc, TSQLTable, or TSimpleDataSet.  Since we’re dealing with MS SQL, let’s keep it straightforward and use a TSQLQuery.  This component represents a query you write and store inside the component, and when it is opened, it fires this query off to the server, then makes the response from the server available to your app.

Grab a TSQLQuery and drop it on the datamodule.  Rename it from “SQLQuery1” to something more meaningful, like ‘qryProducts’ (in my case, that’s what I’m doing, because I’ve got some sample data in a “products” table).

Check out the properties of your query object.  There are a couple of interesting, and a couple of necessary, elements here.

On the necessary front, “SQLConnection” needs to be set – because your query needs to know which database to ask for its information.  Some apps connect to multiple database servers, or in different ways to the same one (for example, as an admin or as a user) and that would mean multiple connection objects (potentially one object with multiple settings that change at runtime, but it’s easier to manage in code with two separate connection definitions).  In our case there’s only one, so click the drop-down in that property and select our connection.

The next and final “necessary” one is the SQL property.  This is a “TStrings” object, which just means it is a list of string values.  That list can be a multi-line SQL statement, but we won’t need more than one for this.  We’re going to open up the strings editor (click on the ellipsis button in the property), and enter the following SQL statement:

Select * from Products

You can now test this query, by changing the “Active” property from False to True.  Again, you’ll be prompted by the program for a username and password (because the query will automatically open the connection, and the connection will want to authenticate you).  Once it goes true, set it back to false and set the connection’s “connected” property back to false as well, because it won’t do that all by itself.

All set with the query…

At this stage we’ve got a connection that can go live, and we’re retrieving data – so if all we wanted to do was manipulate the data or check a value with our program, we’d be good to go.  However, we want to actually show off the information a little bit, so we need some data controls on the main form of our app.
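(To illustrate that point before we move on:  if all you wanted was to poke through the results in code, something like this sketch would do it.  I’m assuming a “Name” column in the Products table, a TMemo called Memo1 to dump into, and the DataModule2 name from before – adjust to taste.)

  procedure TForm1.ListProductNames;
  begin
    DataModule2.qryProducts.Open;        // opens the connection underneath it, too
    try
      while not DataModule2.qryProducts.Eof do
      begin
        Memo1.Lines.Add(DataModule2.qryProducts.FieldByName('Name').AsString);
        DataModule2.qryProducts.Next;    // unidirectional dataset: forward only
      end;
    finally
      DataModule2.qryProducts.Close;
    end;
  end;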

dbExpress is a little quirky, in that it operates on “unidirectional” datasets – as the name implies, they’re a one-way thing.  The DBGrid, which we’re going to use shortly, requires a two-way connector.  So to get around this, we’re going to pull a little “spoof” by copying our results into a locally-held two-way dataset, called a client dataset.

Although for the purposes of this writeup we’re tricking the dbExpress stuff this way, I need to point out that in a real-world situation the feature we’re bypassing is actually insanely useful.  The ClientDataset is designed for creating n-tier applications.  In the early days, apps were generally “desktop” and “client-server”, with workloads either entirely on the single user’s PC, or split between a client and a server.  Around 1997-2000, a revolution happened that added a third option:  distributed computing.  We look at it now as just the norm, but at the time it was brand new and a very big deal.  N-tier means splitting your app’s work up among multiple computers (hopefully in a logical fashion) so that more work can be done faster by the app.  This later morphed into a wide variety of distributed architectures (like “Service Oriented,” etc.), but the premise is that you’d have a server responsible for hosting persistent data, an app that ran apart from it but which was responsible for retrieving that data (and perhaps performing validations on data sent back to it, etc.), and a client app that not only showed and manipulated that data, but was also able to run in a disconnected environment on a “briefcase” model for the data.  When connectivity is re-established, the briefcase ships its changes (called a “delta packet”) back to the server for handling.

That’s what the ClientDataset does.  Very cool component.

Let’s get back to business, though – to feed data to a ClientDataSet, you need a DatasetProvider.  Drop one on your datamodule, and set its name to something that will make sense to you (like “dspProducts” or something).  Next set its DataSet property to your query.  As you can probably guess, the “DatasetProvider” provides a DataSet to ClientDatasets.  Which, surprisingly enough, is what we need next.  Go ahead and stick one on the datamodule and set its name to “cdsProducts”.  Next set its “ProviderName” property to the name of your DatasetProvider, either typing it or via drop-down.

“She might not look like much, but she’s got it where it counts…”

Lastly…

Delphi doesn’t include display elements in its datasets, because the philosophy behind a lot of Delphi programming is “If you don’t need it, don’t include it.”  Datasets are for retrieval and manipulation of data, not its display.  To add the ability to display to the mix, you need a component called a TDataSource.

Grab one from the Tool Palette and drop it on the main form of your app.  Rename it to “dsProducts”.  The job of this component is to relay data from your datasets to the visible data controls on your forms.  It’s a bit limited in scope, but it has several features that will be extremely handy once you get into programming seriously – in particular, when a user of your app changes data in a form, you can hook in routines that look over the changes they’re about to make, and abort them or prompt the user if what they’re about to enter is questionable or invalid.  We won’t really get into that here beyond the quick sketch below, but just be aware that’s what this component is good for.
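Just so you can see the flavor of it, here's one hedged sketch of that kind of gatekeeping, using the datasource's OnUpdateData event, which fires just before edits held in the data-aware controls get written back to the dataset.  The form class name ("TfrmMain") and the "Price" column are pure assumptions for illustration:

procedure TfrmMain.dsProductsUpdateData(Sender: TObject);
begin
  // Fires before pending edits in the data-aware controls are posted back.
  // The column name is an assumption - swap in whatever your table actually has.
  if dsProducts.DataSet.FieldByName('Price').AsCurrency < 0 then
    raise Exception.Create('Price cannot be negative.');
end;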

Since our main form “uses” the datamodule, it has visibility on the components we put there a few minutes ago.  If you go to the Data Source’s “DataSet” property and choose the drop-down, you should see the datasets from the datamodule listed there – pick the ClientDataSet (“cdsProducts”), since that’s the two-way dataset the grid needs.  Select it and let’s move on.

The next thing – let’s keep it basic – will be a DBGrid.  A grid is just a row-by-row display of all the columns in your dataset (the grid itself has a lot of customization features as well, but for now we’re going to just make it a clear window on the data).

Slap a DBGrid onto your form, and assign its DataSource property to the DataSource you created a few moments ago (“dsProducts”).  That’s really all you have to do.

Slap-bang on the form
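For the record, here's the whole chain we've just built, wired up in code instead of the designer.  This is only a sketch – "qryProducts" and the form class "TfrmMain" are assumed names, the rest are the ones suggested above:

procedure TfrmMain.WireUpProducts;
begin
  dmMain.dspProducts.DataSet := dmMain.qryProducts;   // provider reads the dbExpress query
  dmMain.cdsProducts.ProviderName := 'dspProducts';   // client dataset caches the results two-way
  dsProducts.DataSet := dmMain.cdsProducts;           // datasource relays the bidirectional CDS
  grdData.DataSource := dsProducts;                   // grid shows whatever the datasource relays
  dmMain.cdsProducts.Open;                            // cascades down to the query and the connection
end;

Same chain, same result – the designer is just doing this plumbing for you.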

Ready to test something cool?  Set the ClientDataSet’s “active” property to “True”.  If everything is wired up properly, the DBGrid will populate with data from your table – in the designer!  It’s able to do that because Delphi’s IDE is, itself, a running Delphi application.  This was a huge development back when it launched, and for many years afterwards, because there wasn’t any other dev tool that could pull that little trick off.  And when you’re building a data-driven user interface, there is nothing better than viewing it with real results.

Ta-da! Yeah, baby!

Go ahead and set the Active property of both the ClientDataSet and your Query to false again (the CDS will have switched it on) and the Connection’s “Connected” property back to False.

Finally, while we’re in here, let’s put a button next to the grid for turning the data on and off.  I’m also going to show you something a little fancier in how we write the code for it.  Instead of writing a handler that grabs the query directly and turns it on, we’re going to follow the chain of references through the components.

Once your button is on the form, double-click it to create an “OnClick” event handler.

In that handler, write the following code:

if grdData.DataSource.DataSet.Active then
begin
  grdData.DataSource.DataSet.Close;
  dmMain.MSSQLConnection.Close;
  btnDataToggle.Caption := 'Open Me';
end
else begin
  grdData.DataSource.DataSet.Open;
  btnDataToggle.Caption := 'Close Me';
end;

What this translates into is that we look at the grid’s datasource, check its dataset, and if that dataset is currently “Active” (open), we close it and then close the connection behind it.  If it happens to be closed, we open it (which will re-open the query and the connection on demand).  In either case, we change the text of the button to say what pressing it again will do.

You’re all set now – you can take that executable you just built and use it on pretty much any PC that has a SQL Server client on it, and a valid link back to your chosen server.

Play around with the various kinds of controls here – there are a great many data-aware elements you can goof around with.  As for me, I’m going to sign off for a while, and next time I’ll write up some examples of other methods of accessing data – ADO and FireDAC.

Until then, have fun!

 

 

Posted in Development, IT, Programming, Software | Tagged , , , , , , | Leave a comment

A Delphi Primer with RAD Studio 10.2

Hey hey, everyone!

Today is a day off for a public holiday – couldn’t tell you which one off the top of my head – in Germany.  They do a few of these in May and June, and this time around one of them fell on this Thursday.  I’ll be back in the office tomorrow, but I wanted to take a few hours to put this together as the real start of my “refresher” with Delphi.

A little personal background – the last version I did any serious work with was in 2009, while working for a company that did an automotive dealership ERP system.  They were working with Delphi 7, and had started looking into Visual Studio, but at the time VS still insisted on a lot of repetitive code for data access and that just pissed everyone off.

So – I went on after that, and my memory of Delphi from that time is of the “old school” IDE (“IDE” = “Integrated Development Environment”, which just means the app designed to host your programming effort and the tools that accompany it – as opposed to picking up a bundle of unrelated stuff and working haphazardly with it), back when we still had the BDE, and all that.  I played around a little with XE3, but not enough to be re-proficient with it.  So let’s change that a bit, shall we?

I’m going to approach this as if I was a beginner, who has just signed on with a firm or just bought my own copy and plugged in my license key.

As it happens, I’m currently staying (temporarily living) in a teensy little cave of a one-bedroom apartment, and where I’m sitting right now I don’t have WiFi.  As a result, I got a battery of errors when I started up – everything was trying to run scripts on the page, but none of the failures were of a fatal nature.  I won’t detail those, because it’s entirely possible those were my own doing from a prior project.

In the “old school”, the first thing you’d get was a form designer, an object inspector, a VCL bar across the top, and probably a project manager.  None of these were docked, it was a multi-window app (“MDI” in the terminology of programming, which I guess you’d better get used to – MDI means “multiple-document interface”).  This was largely because of Delphi’s origins as a construction site for Windows GUI applications – it later grew into much, much more, but kept to its heritage in how it presented itself with a first impression of “let’s build a Windows app”.

Here, we get a single-window app (an “SDI” – “single document interface”) where all the tools are docked to one another.  The most prominent visual that pulls you in is an almost-center section of ‘new’ options for making new projects:

So…as a new user, the first thing I’d like to see is “Getting Started”.  Except…I’m in the unusual state of not being connected to the Internet right now, and that page requires a connection to get to, so maybe we’ll come back to it later.

I guess we’ll just have to muscle through on our own.

Like most programming systems, everything revolves around the concept of a project, or an app, or some similar goal-oriented thing which will eventually be built and run independently on a computer of some kind.  In Delphi’s case, it’s a “Project Group” now (it used to be just a “project”, but around the Delphi 5 time frame most business apps built in Delphi had multiple independent pieces that talked to one another and shared work across platforms in something like an n-tier architecture, or via Web Services in Delphi 6, and so on).

Choosing “Create a new Project” from the screen selections here results in a dialog helping you to determine what kind of project you want:

I’ve never been a huge fan of C++ syntax, and since a beginner is going to be our target audience, let’s go with a Delphi app.  Highlighting “Delphi Projects” presents a list of various choices in the right-hand pane…

Some of these sound damned cool.  An Android service?  I’ve got an Android phone…as well as an NVidia Shield on my TV.  But that’s a little advanced for right now, so let’s stick with the basics.  Console apps are for black-and-white command-line stuff, generally with no UI, and that shit’s for Linux chumps :).  We’re programming for a Real OS, Windows.  Something people use daily.

Which reminds me – what are we going to build?  One of the things that people who want to learn to program generally don’t think about is the answer to that question.  “What the hell am I gonna make with this thing?”  That was one of my big stumbling blocks too, way back in the ‘90s when I bought my first copy of Borland C++, its fifty 3.5” floppy disks and its forty pounds of books.  I played around with that thing, but that’s all it was, I was playing.  I didn’t have the foggiest clue what to build then.

So let’s make up our minds.  All programming starts simple, so let’s pay homage to the classics and do an old-fashioned “hello world”.  We’ll look at the parts of the development environment that help us do that, and I’ll highlight some advantages we get in Delphi that don’t really come easy elsewhere.

The project that fits this description best for my purpose here is a VCL Forms App – as long as we’re going back home, let’s do it old-school style.

VCL means “Visual Component Library” – it’s the framework of pre-built stuff that comes with the development system.  Frameworks are what drive programming systems.  Without a framework, all you’d really have is a compiler and text files, and you’d have to build quite literally everything by yourself.

And that would suck.  A lot.

So about five minutes after the first programmers started writing programs that could be stored on something other than a gigantic deck of #$%&ing tarot cards, they started writing frameworks.  Frameworks are pre-packaged blocks of code that represent things which get built a lot.  For example, take the humble little button on your screen.

Buttons get used all over the place.  So do things like text boxes, labels, even the window itself gets used quite a bit.  A good framework will have pre-built code that contains these things, so you the programmer don’t have to re-invent the damn wheel just so you can write “hello world” or something.

Delphi in its original state had the VCL, and that was it.  Borland C++ had OWL (the “ObjectWindows Library”), Microsoft had MFC for its C++ side, and Visual Basic had a bunch of “OCX” controls (I can’t remember if I ever even knew what OCX stood for).

I’m going to take a long tangent here.  If you want to dive in with the “hello world”, jump down a ways and skip all this talk about frameworks and libraries and stuff.  This info will be useful though, if not now, then later.

About Frameworks…

Originally VCL was one great big fat chunk of code that would get pulled into your app and you’d have a 600k executable that would pop up your window and say “hello world”.  At the time, that was pretty freaking huge, despite being really fast.  By the time Delphi 2 was released, it was even bigger, and because we had a forest of third-party controls one could buy, every one of those controls would have to be re-compiled into the original VCL code, which promised to become a real spaghetti mess.  About two years – and two product versions – after the v1 release, the VCL got split into many smaller interoperating chunks we called “packages”, so your 600k .exe file shrank down to a svelte 80-120k or so, and more importantly all those 3rd-party systems became self-contained bundles that didn’t threaten to corrupt the core VCL.

Another benefit to this was that it kept compile times reasonable.  When Delphi crashed the party in 1995, Windows apps were largely C++ stuff – which includes Visual Basic.  VB was built in C++, and itself was not a “compiled” app until much later.  The C++ compilers at the time, running on PC processors, could take hours to build an app and put it all together.  In the case of really big ones, it might be days.  Delphi popped up, and all that changed – Delphi could compile an app in seconds.  Usually in the single- or low-two-digit seconds.  And Delphi’s IDE was written using the Delphi compiler, and all million-odd lines of code in that could compile in minutes.  That was a really big deal.

Oh yeah – let’s talk for a sec about what “compiled” means.  Compilation at its most basic level means taking one kind of code and converting it into another.  Usually that’s in the context of taking something a human wrote and turning it into something that a PC chip and operating system can understand and act on.  This is in contrast to “interpreted” or “scripted” code, where something reads the instructions written and simply performs the actions described in them.  In a compiled app, the compiler reads your code, and it builds a self-contained output ‘thing’ (in our case here, an executable file), and then it goes back to bed.  The output is the actor, and it has within it all the instructions you gave, ready to go.  In an interpreted app, an interpreter holds your code in the form of a “script” and acts on it, line-by-line – it’s the interpreter that does all the action.  As a result, it is both slower and more limited in its possible actions.

Today, with the current crop of processors, compile times aren’t that big a deal any more.  When you can throw six or eight cores with possibly two threads per at a compiler, there’s not a whole lot out there which will take a great deal of time to build.  Similarly, most interpreters, despite being a bit clunky, operate like they’ve mainlined about a kilo of coke on a modern processor.  They’re still limited in a lot of ways (for example, one comes to mind which doesn’t support using all those cores and threads – it still just fumbles along running one process per CPU), but the big differential between compiled and interpreted isn’t quite the gulf it used to be.

Back to building our app, and the frameworks involved.  Today, pretty much everything Microsoft’s environment works with is “.NET”.  They have what’s called the “.NET Framework”.  Very original.  (For quite some time before and well after release, Microsoft had some serious communication problems – most people inside and out simply didn’t understand what ‘.NET’ was supposed to be about.)

Delphi has VCL – and several other frameworks.  These all appear as “components” which can be dropped into an app, “wired” together with settings and code, and compiled into it when you tell the IDE to build your program.  VCL has been, and probably always will be, largely about Windows.  The framework handles the creation of windows and controls by calling functions in the Windows operating system, which themselves call down to the hardware of your PC – the disk, memory, chip(s), video card(s), etc.

It also has the “RTL” – the Run-Time Library.  The RTL is a set of non-visual code units that cover common operations which aren’t necessarily involved with building visual applications.  A good example of an RTL unit is “Math”.  There’s a unit actually called Math, full of functions and procedures that revolve around mathematical operations like rounding, modulus, sine/cosine, and even things like figuring out payment schedules, net present value, future value, and so on.  I specifically call this out because one of my earliest self-designed apps used a lot of geometric functions to produce 2-dimensional graphics on demand, and I ended up reinventing a lot of Euclidean geometry in code to accomplish it – I was ignorant of the Math unit in the RTL.  If I’d known about this thing, I’d have saved myself weeks of coding time.  Much of the RTL doesn’t hinge on Windows at all – it’s platform-independent.  I mention that, because while Visual Studio is entirely about Windows…

Delphi doesn’t just do Windows any longer.
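Quick aside before we get to the cross-platform part – here's the sort of thing that Math unit would have saved me back then.  A hypothetical little console program, purely for illustration:

program MathDemo;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Math;

var
  Angle, X, Y: Double;
begin
  // The 2D geometry I once hand-rolled, done with the RTL instead.
  Angle := DegToRad(30);                  // degrees to radians
  X := 100 * Cos(Angle);                  // a point on a circle of radius 100
  Y := 100 * Sin(Angle);
  Writeln(Format('Point at 30 degrees: (%.2f, %.2f)', [X, Y]));
  Writeln(Format('Distance from origin: %.2f', [Hypot(X, Y)]));
  Writeln(Format('X rounded to one decimal: %.1f', [RoundTo(X, -1)]));
end.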

Delphi also has “FireMonkey” (don’t look at me, I didn’t name the thing), which is aimed at cross-platform programming – meaning stuff built on the FM framework can run on Windows, macOS and iOS (Apple’s operating systems for Macs and for phones/tablets), and Android.  There aren’t quite as many components in FM as in VCL, but there are plenty to get the job done.  And what you build for one can then be built for each of the others – so a “hello world” for Windows can also be compiled and delivered to an iPhone and an Android device without changing your code.  Of course, if you use platform-exclusive functions (for example, if Microsoft SQL Server is your data repository), you limit the cross-platform nature of FM to just that platform, so keep that in mind.

Other, smaller frameworks that Delphi makes available include

  • EMS “Enterprise Mobility Services”, which links to the ‘mobile enterprise application platform’, which I’m not at all familiar with and smells strangely of CORBA. Basically, if you’re starting up in Delphi and you don’t recognize what this is, avoid it.
  • DataSnap, a framework that enables you to divide your application’s working parts among multiple packaged applications/libraries, all of which can then connect to one another and trade data or invoke each others’ functionality, either on the same PC or spread out over a network.
  • Web Broker, a set of components that enable your apps to become web server extensions and generate content in the form of HTML or XML documents as responses to being called over HTTP.
  • IntraWeb, an app framework that enables you to cook up web apps with a visual interface.

In addition to these frameworks, Delphi also enables you to make calls directly to your platform API (which means making calls directly to the operating system of the computer on which you’re running) either as a straight-through call or by using pass-through calls that are contained in code units supplied with the RTL.  (These units don’t quite amount to a library, but they are provided to simplify making the connection to the operating system.)
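Just to show what “directly” looks like, here's a hedged sketch of a straight Win32 call from some button's click handler – the form and button names are placeholders, but MessageBox itself is the genuine Windows API, surfaced by the Winapi.Windows unit:

procedure TForm1.btnApiClick(Sender: TObject);
begin
  // Needs Winapi.Windows in the unit's uses clause.
  // Straight call into the operating system - no VCL wrapper in between.
  MessageBox(Handle, 'Hello, straight from the Win32 API', 'Direct call',
    MB_OK or MB_ICONINFORMATION);
end;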

Back to the application…

So, where were we?  Oh yeah, we were going to do “hello world” – and courtesy of all the work that went into building the VCL, it’s going to be dirt-simple to create the window, complete with a button and a pop-up dialog that contains our message.  We’ll do it with only one line of code – without the framework, it would be thousands of lines.

From the center pane of the IDE, choose “Create a new project…” and choose “Delphi Projects,” then “VCL Forms Application”; or alternatively you can use the menus File > New > VCL Forms Application…

Whichever route you choose, you land here:

I’m a bit of a stickler here, and despite my old-school start, I noticed over on the side there it says that my target platform is 32-bit Windows.

 

 

Well, to be truthful, I haven’t had a 32-bit CPU or operating system in my house since what, 2006?  Maybe 2007?  So let’s change that and add Win64 and make it our target.  This laptop is on 64-bit Windows, and that fits my goals just fine.

In the Project Manager on the right of the screen, right-click the “Target Platforms” entry and select “add platform” from the menu that appears.  You’ll be rewarded with this dialog:

Once you OK this, 64-bit Windows will be added to the targets, and it will be made “active” (it’ll be bold in that list).  Active means that when you compile, the executable that gets built will be for the target platform currently bolded.  I’m going to remove the 32-bit windows target, just because it clutters up my space and I don’t have anywhere right now where I need a 32-bit app.  Right-click on the unneeded one and delete it if you want to do the same.

In times gone by, the VCL was stretched out just below the menu bar as a set of square icons in a tabbed interface across the top of the IDE.  This got a bit unwieldy around the Delphi 8 time frame (mid-2000s), because a vanilla install would easily end up with more than a dozen tabs.  Since then it has grown to something like fifty categories, and there’s no really good way to present that many options in a GUI.  The current version packs the VCL controls, along with a bunch of 3rd-party and multi-framework options, into a long expandable list called the “Tool Palette,” currently found at the bottom right of the IDE.

What we want is just a button…so how do we find it from among all these things?

Happily, that’s going to be easy.  There’s a ‘search’ box at the top of the Palette, just type “button” into that and see what pops up in the list.

TButton is what we want.  Either press “enter” or double-click it with the mouse, and Delphi will drop a TButton right in the center of your app window.

It deserves note that the “T” at the beginning of pretty much every component Delphi has ever seen stands for “Type”.  It represents a class of object – I’ll give you the broad rundown on “object” and “object orientation” some other time – and the class is what defines the object.  Think of a “class” as the same sort of thing as a “blueprint” or “recipe” or “design”.  It isn’t the object itself, but instructions of how to make that object.  Java totally fucked up in naming both their recipes and their existing elements classes, and that has caused who knows how much confusion for beginners over the years.  But that’s Java, and it hasn’t ever made a whole lot of sense outside of “how can we get Windows programmers to build stuff to run on Sun boxes?”  Another day for that, my prejudice is showing.
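To make the recipe-versus-dish distinction concrete, here's a tiny hedged sketch – it assumes some form method and Vcl.StdCtrls in the uses clause, and the names are just placeholders:

procedure TForm1.MakeAButtonAtRuntime;
var
  B: TButton;                   // a variable that can refer to a TButton object
begin
  B := TButton.Create(Self);    // the class (the recipe) creates the object (the dish)
  B.Parent := Self;             // give it a home on the form so it actually shows up
  B.Caption := 'Made at runtime';
end;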

So “TButton” is the class which is used to create buttons.  Delphi dropped one on our form.  What now?

Now, we configure the button to look and feel the way we want.  To do this, we can drag it around the window, we can also grab its sides and corners to resize it.  Go ahead and do some of that.  I’ll wait.

All done?  Okay, when we want to change some of the more nitty-gritty bits about the stuff in our visual designers, we need the Object Inspector.  By default this is located in the lower-left corner of the IDE.  It shows you the properties and the events of the currently-selected item on the form.

(By the way, “Form” in Delphi terminology represents a “Window” of an app – so when you’re working on a “form” you’re working on what amounts to a window.  There’s some extra nuance to this, but for now when you’re starting it’s best to think of it that way.)

Select the button, and look at the Object Inspector.  A whole bunch of properties of the button are listed there.  “Properties” of a thing in the Delphi world are the qualities of that thing – the settings that make it look and feel and behave the way it does.  In the real world, things have properties too – your shirt, for example, has the properties “material,” “color,” and “sleeve length.”  For my shirt those are “cotton,” “black,” and “short”.  The specific setting a property holds is called its value.  Think about the properties of the things around you.  How would you describe them to someone?

Back to the button.  Its most commonly used property is “Caption” which is the text that appears on the surface of the button shown to the user.  By default, the caption of the button is the same as its name.  We’re going to change these values now.

Change the property Caption to “Say &Hello”.  Notice that as you do, the form shows your change in real-time.  Also take note that the “&” didn’t show up – instead, the next character, the “H”, got underlined.  This indicates that the H will become a ‘hotkey’ when your app runs, and in addition to pressing it with a mouse click, the button will also respond to the key combination Alt+H.

Find the property “Name” in the Object Inspector.  By default, components added to a Delphi app are named as their class plus a number, counting up based on how many others of that class already exist.  So we might see “Menu3”, “Button12”, and so on.  When you’re writing code, these default names are really hard to cope with, so get in the habit of naming things sensibly – try to capture what they are and what they do, because your code can’t “see” the form the way you can while designing it.

For now, rename this button to “btnHello” – btn being an abbreviation of “button” and Hello telling us what this thing is supposed to do.
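Incidentally, everything the Object Inspector does at design time can also be done from code while the app runs.  A minimal sketch, assuming we'd added an OnCreate handler for the form (we haven't in this walkthrough, so treat it as illustration only):

procedure TForm1.FormCreate(Sender: TObject);
begin
  btnHello.Caption := 'Say &Hello';                        // the & marks the Alt+H hotkey
  btnHello.Width := 120;
  btnHello.Left := (ClientWidth - btnHello.Width) div 2;   // center it horizontally
end;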

Do you see, in the top of the Object Inspector, there are actually two tabs there?  Properties and Events?  “Events” are things that happen to your objects, like mouse clicks, keyboard presses, and so on.  Take a scroll through the available events of a button just to get a feel for what they might be.

Each component has a default event, and if you double-click the component on the form you’ll automatically create a handler for that event, which gets assigned to it in the Object Inspector.  That basically opens up a little “hole” for your code to live in, and when the event happens your code will be executed.

Double click either on the button itself, or in its “OnClick” event in the Object Inspector.  Both actions will result in the same outcome, you’ll end up in the code editor, your cursor itching to write a bit of code for a routine entitled “btnHelloClick”.

In that line, enter the following code:

ShowMessage('Hello World');

Capitalization doesn’t make any difference in the Delphi world, I just use it to make things look a little more sensible.  “ShowMessage” is a little routine from the VCL “Dialogs” unit that takes a single string value (a string is just a sequence of characters) and pops up a dialog with that string in it.  Our app already references the Dialogs part of the VCL, so we don’t have to add anything for this.  Our app is ready to run!
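For reference, here's what the whole handler looks like in the editor once Delphi has generated the skeleton and you've typed in your one line:

procedure TForm1.btnHelloClick(Sender: TObject);
begin
  ShowMessage('Hello World');
end;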

Up in the menu bar, there are two buttons with green right-pointing arrows that can be used to run the app – “run” and “run without debugging”.  Either one will work, but as a developer you’re going to want to use “run” most often, because debugging is what developers spend a very great deal of time doing, and your users won’t like it if you never debug your programs.  Fortunately, we won’t need any debugging in this app, because it’s very simple and you will get it right the first time.

Press “run” or hit the F9 key (same thing).  Delphi will compile your app – you’ll see a dialog indicating its progress – and will run it.  You will end up with this:

That’s your app!  You just wrote, compiled, and ran your first Windows application!  Congratulations!

Go ahead and press the button, see what happens.  I’ll be here.

Up pops a centered little dialog saying “Hello World”, right?  Pretty cool.  Note that while you’re pressing this – if you used “run” rather than “run without debugging” – the pane at the bottom of the Delphi IDE called the Event Log registers things like “thread exit,” “thread start,” and so on.  Those are part of the integrated debugger, which we won’t talk about in this article, but which is a really super-cool and very useful tool for when you’re writing more complex programs.

Go ahead and close down your app – you can either press the “X” button on the top-right corner or enter the Alt+F4 key combination, both will close the app and return you to the IDE.

Last thing for this article, let’s save this project.  It’s not a big deal to lose this, because it’s so simple, but saving it will let you know what files we’re dealing with.

There’s a couple of save buttons on the top of the IDE, one disk for “save” and two disks for “save all” – those are disk icons, by the way.  For the younger readers, those look like what we used to use to transfer data and programs around on, called “floppy disks.”  Ask your parents what they were.

Either choose the two-disk “save all”, or go to the menus and choose File > Save All.

Your first prompt will be to save “Unit1” – we didn’t rename this unit, because it’s the only one in your project.  Later, when there are more units in more complex projects, you’ll want to rename units as soon as you create them, for the same reason you rename components when you drop them on a form.  If you have a directory where you want to keep your code projects, navigate there and save Unit1 in the directory where you want to keep your Hello World project.  Note that if you don’t keep your projects separate, you’re likely to overwrite files and lose your work, so definitely use different directories for each project.  Get in that habit now.

After being prompted for Unit1, you’ll be prompted to save “Project1” as well.  Project1 is the name of the program you’ve just written, and Delphi will name its executable the same as your project file name.  So if you want to call this something else, now’s a good time to change its name.  “HelloWorld” would be a good, if not very original, title :).  (Delphi doesn’t like spaces in project names, by the way, that’s why there isn’t one in my suggestion.)

We’ll talk about what all those files are another time.  What’s important is that now when you compile your app (go ahead and do so now that you’ve saved the project – Ctrl+F9 will compile it, or you can go through the menus and choose Project > Compile HelloWorld), the executable will be found in this directory, in a “Win64” subdirectory.

We’ll deal with these…later

You can take that executable and run it on any 64-bit Windows computer you want now, it’s all yours and you get to do with it whatever you want.

 

That’s the one that matters.

 

 

So you’re done – you’ve built your first app in Delphi and you’re ready to tackle the world!  Congratulations again.

I think the next one of these I do, we’ll do a little bit with some data, and tap into a Microsoft SQL Server.  But for now, I’m going to go have a beer and build a model or something.  I’ll raise a glass for you, and hopefully I’ll see you next time!

 

 

Posted in Development, IT, PC Stuff, Programming, Software | Tagged , , , , , | Leave a comment