An Open Letter to Senator Richard Burr of NC, 18.10.2019

This letter was submitted through Burr’s own website a short while ago. I invite you all to write similar letters to your Republican Senators, and to share this liberally among your friends and acquaintances:

Okay, you buffoon.

I warned you about this over a year ago. I told you that you could be the “voice of reason” in opposing Trump. What did you do? You sat on your goddamned hands. While he praised Nazis. While he ruined our air and water. While he racked up literally DOZENS of crimes against the American people. Why? BECAUSE HE WAS A REPUBLICAN, and you filthy traitors would rather support one of your own than support America.

You’re a goddamned traitor. Right alongside each and every one of your colleagues.
And now, the crowning glory of your inaction.

We had Syria completely under control. We had ISIS almost entirely on the ropes, courtesy of Obama. Oh, but you can’t possibly handle having the success of a black man look good, can you?

In ONE DAY, you, the Republicans, turned Syria into a devastating Saigon evacuation, courtesy of Trump and your refusal to take him and Pence to task.

ISIS prisoners are freed, the group is re-forming as I write this.

We are air-striking OUR OWN bases because our troops didn’t have time to evacuate properly. RUSSIA, our worst fucking enemy on the globe, is occupying the bases we didn’t have time to destroy – and very likely their intel corps is poring over all the valuables left over in them.

Turkey is threatening those troops who are trapped in the area. (Fucksake, a third-rate shitty country like Turkey? A former ally? SHOOTING AT US?) Our allies are being massacred in what will be viewed by history as a Turkish genocide. Enabled by you.

FIFTY OF OUR NUKES are being held hostage in Turkey. Turkey may very well become a nuclear-armed fuckball country with our own captured weapons.

This is the result of a TRAITOR PRESIDENT, and a treasonous party supporting him.

Which includes you – who knowingly stood by while this traitor betrayed us almost daily. Who knowingly overlooked damning information about the man, overlooked that he betrayed the USA, simply because he was in your political party.

You execrable filth. You are almost worse than he is. At least he’s doing it for personal gain. You’re doing it for no better reason than the preference for one football team over another.

Each and every Trump supporter has a price to pay. Those who had a chance to stop him and refused will pay the greatest price of all. I call on you to remember how Germany ended, and how its leadership was treated once it was subdued. There were an awful lot of trials, convictions, and nooses at the end of that story.

The time remaining for you and your filthy pack of vermin to do the right thing is fast running out. The evidence will come out. It always does. And when it does, those who refused to stand up for our country will be held accountable.

I’m giving you a benefit of the doubt that you really don’t deserve, in assuming you are still debating the choice. If I’m right, you’d better choose soon, pal. Because the torches and pitchforks are selling out at Amazon.

And the 21st-Century Nuremberg trials are coming.

T

Edit: corrected location from Hanoi to Saigon


GreenStrawberry EVA Pods for the 1/144 scale Discovery XD-1

Not long ago, I put together a version of the USS Discovery from “2001: A Space Odyssey,” and during that build I included a set of resin-cast EVA Pods from GreenStrawberry. The base kit was the Moebius 1:144 scale (it comes out about a yard long).

First off, let me tell you, wow – I was very impressed with the Moebius kit. Great detail, easy fit, and a really solid build. I added the Paragrafix cockpit and pod bay as well, as I wanted to get some interior into the model. GreenStrawberry also makes similar fittings, offered together with the pods in a 3-piece “Fruit Pack” (but I had bought the others six months before I knew about the EVA pods).

But let’s talk about the EVA pods – that’s what this post is about. They were really cleanly cast – no flash, included PE parts, and very good match-up with the kit. They’re a perfect add to the Discovery, since they play such a dominant role in the film (just ask Frank Poole).

The one thing I felt was missing.

Lights.

Neither the Moebius kit nor the pods were lit. That said, there’s loads of room in the rear engine pod and the command bulb up front. A 9v battery, a micro switch, and some LED tape and bulbs in all the right places, and the model was ready.

But I really, really wanted to re-create the scene where Bowman is leaving the Discovery for the last time, and his pod is just departing the ship. One pod is missing (Poole’s), and Bowman was leaving from the center bay.

And in the scene, the pod has spotlights lit up.

Little fella’s all lit up, see?

All by themselves, the pods are great, and to use Gordon Ramsay’s terminology, I felt it was time to “take them to the next level.”

One small…and I do mean small…problem – the pods are solid resin.  No space for lights.

Did I mention they were small?

But, as I said, this is a *small* problem.  Nothing a little drill work and some creative wiring can’t fix.

So let’s get started! 

Parts you’ll need for this:

  1. Greenstrawberry’s EVA pods, of course.
  2. A 5mm cool white LED – preferably something with a high lumen value.  I used some 4.2-candela LEDs I had on hand.  The higher the better, though I’d suggest not going above about 9 candela to avoid heat issues.
  3. A resistor for the circuit.  I used 9v to power my Discovery, and for that your resistor should be somewhere between about 400 and 1,000 ohms.  A lower value means a brighter light, but at 9v don’t go much below 400 ohms.  Standardized resistors are easy to find at 470 ohms, so just go with one of those.
  4. Some really fine wire (magnet wire is ideal – I think what I’ve got is about 36-gauge)
  5. Some 1mm fiber optic cable
  6. Drill bits in 1mm and 5mm diameters and a drill or pin vice that can handle them
  7. Side-snips
  8. Some good steel files or a Dremel with a cutting / sanding extension
  9. Reflective chrome or silver paint, some satin varnish/spray, and some gloss clear acrylic or enamel
  10. CA glue
  11. A 9v battery
  12. A soldering iron or soldering station
  13. Solder and flux (some solder is “resin core”, in which case it has flux in it already)
  14. A set of “helping hands” – basically this is a magnifying glass and a couple of adjustable alligator-clip arms, usually available for ten or twenty bucks.

*It’s worth noting that #14, though technically not necessary, is so damned useful you’re going to wonder how you got along without one for a lot of jobs.
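As a sanity check on the resistor in item 3, the usual sizing formula is R = (V_supply − V_forward) / I_target.  Here’s a minimal Python sketch – the 3.2 V forward voltage and 15 mA target current are my assumptions for a typical 5mm cool-white LED, so check your own part’s datasheet:

```python
# LED series resistor sizing: R = (V_supply - V_forward) / I_target.
# The forward voltage and target current are ASSUMPTIONS for a typical
# 5mm cool-white LED -- substitute your LED's datasheet values.
V_SUPPLY = 9.0     # volts, from the 9v battery
V_FORWARD = 3.2    # volts, typical for cool white (assumption)
I_TARGET = 0.015   # amps (15 mA), safely under a common 20 mA maximum

r_exact = (V_SUPPLY - V_FORWARD) / I_TARGET
print(f"exact: {r_exact:.0f} ohms")  # exact: 387 ohms

# Round up to the nearest common (E12 series) resistor value:
E12 = [330, 390, 470, 560, 680, 820, 1000]
r_pick = min(r for r in E12 if r >= r_exact)
i_actual = (V_SUPPLY - V_FORWARD) / r_pick
print(f"pick: {r_pick} ohms -> {i_actual * 1000:.1f} mA")  # pick: 390 ohms -> 14.9 mA
```

Anything from that 390 up to the 1,000 ohm ballpark will light the LED; the commonly stocked 470 ohm part gives about 12 mA, which is why it’s a safe default.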

Alright – first up, you want the lights to be in scale with the model.  If I put SMDs up front on the face of the pod, they’re too bright (not to mention slightly too large to fit).  They’d also require wiring, which makes things a little too cluttered.  Better to light from the interior.

Slap a coat of white on the pod(s) first.  My own preference is something like Vallejo primer, but that’s not a must-have.  This will provide good contrast to see your work as you go.  Leave the little tab on the bottom of the pod, because it makes an easy grip.  Just remove it when you’re done.  An easy way to hold it is to stick some Blu-Tack or other putty on the end of a sprue, and embed the tab in the putty.  Spray and wait a little while for it to dry.

Using a sprue and tack to hold the pod while working
Coat of white gets things going

Next we need to make some room for the LED.  Start by putting a very shallow guide-hole in the center of the bottom of the pod.

Guide hole right up the…ahem…

Now the pod is only about 18mm tall, so you have to be careful about the depth of the next step. To avoid getting carried away, mark your 5mm drill bit at about 15mm length.  A CD pen or similar is fine.

Teensy is a good word for this.
Mark your 5mm drill bit so you won’t overshoot and blow the top of the pod’s head open

Once marked, drill CAREFULLY up the bunghole of the pod.

And be careful NOT to overshoot the mark

Once drilled, clean the gunk out of the hole.  Your next challenge is to open some channels from the spotlight locations into the center hole.  In each of the four spotlight locations, you want to open a 1mm hole aimed inward toward the main gap. 

The two upper spotlights should be aimed inward on the X-Y plane, but level on the Z axis. Each of the four spotlights will need its own space for a 1mm fibre, so the channels shouldn’t overlap.

We are going in through all four lights here, and we need to not overlap

The lower spots will be inward X-Y, and slightly upward on the Z – but you don’t want the channels to bump into each other.

Straight in for the top lights
Try to aim your bit towards the center of the main port

A tool that will make this a lot easier is the Tamiya power drill, but that’s not a must-have.  Open the light channels, and clean out the leftovers. 

Verrrrry useful

Get out your 5mm LED now, and test-fit it in the hole.  Should be a perfect fit, might even be a little bit tight, but that’s okay.

Cool white 4.2 candles was my choice
Checking the fit

Okay, if you’re satisfied that the fit is good, pull the light back out – we’ve got some more work to do. 

Let’s next connect the wiring.  Those of you reading this who already have experience with making your own circuits, jump ahead.  I’m going to assume I’m dealing with newcomers for a while. With each LED, the length of the legs denotes the polarity (positive and negative ends) of the LED.  The longer of them is the anode, or positive. 

The shorter is the negative, or cathode.  It’s possible that the legs were trimmed even; if so, check for the “flat side” on the rim of the LED itself – the leg nearest that flat is the negative.

The negative (cathode) should be slightly shorter than the positive (anode)

For any LED circuit, you’ll need to connect your resistor to the negative leg.  Fortunately, there’s no rule that says it has to be connected directly – and there’s no room for a resistor in the pod bay anyway.  I connected wires to each leg, and then my resistor to the end of the negative wire.  By the way, it’s also a good idea to always be consistent with your colors.  A handy standard is to make negative always black (this is often an industry standard, though I have seen exceptions), and any non-black color can be positive.  In the absence of black, just choose a darker color to be negative. 

We’ll begin the circuit by taking a couple sections of equal length wire, stripping the insulation from the last cm or so of length on each end, and “tinning” the wire.  Put the wire into the grip of one of the clips of the “helping hands,” and then apply some solder to the ends with the soldering iron.  You want the wires to acquire a “silvered” look.  You want to do the same to the legs of the LED.  The flux and solder will clean the wires/legs, and make them much more easily connected.

Tin the ends of the wires and the legs of the LED
Tinned connections also mate together much easier than raw ones

Next solder the resistor on the end of the black wire.  The legs of the resistor should be tinned as well. These won’t be the final wires for the model, they are only here for use while we fit the lighting. Once complete, we’ll remove these wires and connect up inside the model with magnet wire.

Once that’s on, tug it to test as well.  As long as it doesn’t come apart in your hands, you’re good to go.  Time for a final test – run some current through that beatch.  Get your 9v battery and put the resistor on the negative terminal, and the white (or whatever color) wire on the positive.  You should be rewarded with a healthy bright light.

A bit like that

Good deal, well done.  Let’s move on, shall we?

Let’s confirm that the pod is clear of any debris left over from the drilling, and establish how deep you should sink the light when the time comes.  Hold your 9v battery in the last two fingers of one hand, and put the contacts back in place to turn the light back on.  Hold the light by its legs firmly in the same hand and slide it back up the backside of the pod.  Just go to the lip of the LED, we’re trying here to establish whether or not there’s any garbage left in the spotlight channels, and also to see that they are generally pointed in the right direction to capture light. 

Check each of the four individually, by looking down the hole and confirming unobstructed light.

Should look something like those images above. Note that my lower left wasn’t perfectly aimed, so it wasn’t as bright as the others.

Next step.  Pull the light and set the pod aside. 

Take a close look at your LED. Inside, you’ll see the actual diode, which is the little bit of circuitry between where the legs end within the bulb of the light.  You may also notice a large clear area above the diode, which is basically dead plastic used to broadcast the light.  We’re going to trim a lot of that off now.

They all look pretty much like this, wherever they’re from.

First, saw or file off the head of the LED until you’re down to about 1mm of plastic above the diode.  Next, file away the brim or “lip” around the bottom of the plastic.

We’re aiming for a look like this

Test it on the battery again, to make sure you didn’t damage the diode or the legs.  If you did, pull out a new LED and get to work.  Come back when you’re done. 

Break out your paint now.  Take some chrome or silver (or just any reflective color you have access to – could even be gloss white, I’m not picky), and slather a good bit of it on the interior of the big hole in the bottom of the pod.  Once you’re satisfied with the coverage of the interior, run your 1mm drill bit through the four spotlight channels to make sure you didn’t accidentally seal any of them up.

We’re establishing a reflective interior to magnify the light

Set the pod aside, and get a bit of gloss clear ready.  After trimming, the cut areas of the LED are probably very cloudy.  Paint the whole thing with a thin coat of clear gloss.  This coat will settle into all the irregular scratches causing the cloudiness and will re-establish a glass-like cover.

Make it good as glass

Okay, time for you to take a break.  Put this guide aside and go have some food, a beer, and relax a bit.  Your pod and the LED both need time to dry anyway.  Park the pod in such a way that any excess inside will flow to the “ceiling”, and set the LED aside to dry as well.  Conveniently, the “helping hands” has two alligator clips suitable for just such a purpose.

Go ahead, go eat.  I’ll wait.

You’re back – great!  Let’s slap the decals on the pod now.  When putting them on, note that the decal sheet Greenstrawberry uses isn’t the super-fancy stuff the big companies have – it’s one solid sheet rather than individual decals parked on a backing page.  So when you cut them loose, cut as close as you can without damaging the decal itself.

Additionally, the “earmuff” thruster packages on the EVA pods should have their centers removed to make your life easier.  With a very sharp razor knife, cut the center in a circle.  Once the decal has soaked for a while, before you remove the outer circle for application you can flick out the center using the tip of your razor.

The cut will look a little like that

In a case like this, although it isn’t an absolute must-have, I really do strongly recommend a decal softener.  In case you don’t know what that is, it’s a solution (practically every model dealer carries some form of it from various vendors) that will semi-dissolve your decal against the surface of the model.  This has the effect of eliminating “silvering” of the decal material, and provides a “painted on” look to the decal once it’s dry.  My personal experience favors Microscale’s “Micro Sol” and “Micro Set”, but your mileage may vary.

Decals on, we’re ready to move

My cockpit window came out a little rough, so I’m going to dress it with a little bit of black paint.

For the more daring lighting fiends out there, yes, you could conceivably drill out that window space to expose the LED space.  Yes, you could, after anchoring the LED, fill it with clear resin to harden up into the window space and then paint some transparent red patches and black and so on, and light the cockpit. 

But if you’re going to do that, you’re way beyond the scope of a guide on the basics like this, aren’t you?  Let’s keep it simple. :) 

Give your pod’s decals time to cure, best would be overnight, but at least a few hours.  (I know, it’s hard to let it sit, but go find some other part of Discovery to work on for a while.)  Once that time has elapsed, slap a satin coat on them to protect your work. 

We’re in the home stretch now.

Push the light back in about the same depth as previously done when you were checking the spotlight channels.  Check them again, and try a few different depths to see if you can find a “best light transmission” point for all four.  This step is just like what you did earlier. 

Once you have it where you want it, use a small dose of CA glue to park that light in place and keep it there.  After this glue dries, this is a good time to saw or cut off the retained tab that you’ve been using as a grip this whole time.

Bye-bye little fella, you did good

Make sure not to saw through the LED legs when you’re doing this.

Paint with the reflective paint you used for the interior over the bottom of the LED.  Try to keep it off the legs of the LED as much as possible.  After the reflective stuff is dry, paint black over it.  Test the light to see if you have any light leaks.  When you’re satisfied that there aren’t any, paint white over it.  Again, try to avoid the LED legs.  Not fatal if you get paint on them, but easier if you don’t have to burn that paint off later.

Light-block the bottom with first silver/chrome, then black – check it after it dries, you might need another coat of black

Regardless of the scene you want to replicate, those LED legs are going to need to be trimmed and concealed.  Using the sharp point of a triangular file, cut two small channels into the back of the bottom of the pod so you can fold them back into those spaces.  Paint the channels white again, unless you’re going to putty them over later (in which case the paint can wait until after the putty and sanding). 

Back at the front of the pod, it’s time to use that fiber optic stuff that’s been watching you from the side of the table. 

With each of the four spotlight channels, first insert the fiber optic and jam it in there as far as it will go, marking or measuring how much that is.  Withdraw the fiber, and take note how deep it should be.

Next, put a tiny amount of gloss clear as far down into the channel as you can get it.  I get the best results injecting it with an insulin syringe (you can also acquire blunt-end syringes easily), but it can be dribbled down with a pin if you have steadier fingers than I do.

I use a glossy white bathroom tile as a base for this work – cheap as hell, very durable

Your objective here is to provide a clear seal that has as little light-blocking properties as possible between the LED itself and your fiber.  It’s also possible that you’ll end up with gloss clear coming out the other holes – that’s okay, it just saved you some work.

Glug glug glug!

After the gloss goes in, immediately follow by pressing in the FO fiber.  Go as far in as you can with it, and wipe up any gloss that bleeds out the hole.  Once you’re convinced it’s at the right depth, use your side-snips to cut it flush with the spotlight opening.

The gloss fill spread amongst all four floods on mine – totally cool, just needed a little wipe

Repeat for each opening.

Give these an hour or so to dry before satin-coating the exterior once more. 

Next, apply gloss clear as tiny droplets to each of the four spotlight openings, and the cockpit window. 

Go get that little brass card that came with the pods.  Separate the arms of the pod from the brass PE sheet.  Anchor them in the tac on a stick and paint them white.  Satin coat them when that’s dry.  Make sure that the “hands” don’t accumulate too much paint and fill up – you want to be able to see the “fingers” clearly.

Before…
…and after painting

Definitely be careful not to overdo it with the paint, you can end up filling those hands.

When they are dry, trim the ends down to where they should be, dip them in CA glue and apply to the mounting point between the spotlights.  Once they are seated, you can apply a little more CA to each of them to reinforce your mount.  Now paint a bit of satin white over the CA.

Trim carefully, or you’ll end up bending something or worse – feeding the carpet monster

Test your wiring again, just to light her up and enjoy the feeling. :)

A little light porn to go with your reading…
Check it in shadow too, to make sure you have her set right.  Not much you can do to correct it at this point, but since you have three of these pods, you can decide if this one belongs in the bay or outside the ship

In my case, the pod was resting on one of the landing pads (a separate 3rd-party package of PE), and the pad in question was extended out of the “mouth” of the command deck on some square brass tubing from Albion Alloys.  I connected magnet wire as thin as I could find to the LED legs of my pod, as closely as I possibly could get to the pod itself, and soldered it in place there.  I then trimmed off the excess leg and folded the remainder up and back into the channels I’d cut in the bottom of the pod.  A bit of white paint and all was hidden away.  As the back of the pod faced the model, a little rough edge there would not show.

If you’re actually looking for it, you can see the legs and the magnet wire – but when you build, you’re aiming for a view from about a meter away, and the angle of view obscures the wiring

The magnet wire was then routed behind and under the platform, then forward underneath and glued to the corner of the tube where it met the pod platform, so that the length was now pointing “forward”.  I took that length and reversed it again, this time threading down through the tubing and out the back of the pod bay, inside the command bulb.  I ran a little tiny bit of white putty along the corner, and plugged up the end of the tube, then covered that with white paint.  The wiring was rendered invisible. I glued the pod in place on its platform, and all was right in the world.

The obligatory “Hero” shot…cape sold separately

While the Discovery herself is a fantastic model to build, adding something like these Greenstrawberry pods elevates a beautiful model into something absolutely out of this world.  I hope what I’ve written here helps you add some really spectacular lighting effects to these great little pods.  Compared to the bare kit, the pods and the extra lighting have turned Discovery into an absolute hero of my display area.  She’s the new star of the show here.

As always – and if you have any questions, comments, or death-threats, by all means send them on to me via the Facebook groups, or direct to my email via the site here. 

Happy modeling! 


Plastics.

Okay, I had to write this. No, seriously, I had to write it. It’s an assignment for my B2 Deutsch class.

So, highlights in Deutsch:

Es gab eine Periode der Erdgeschichte, die Karbon genannt wird. Sie liegt etwa 360-300 Millionen Jahre zurück. Während dieser Zeit gab es nicht genug Bakterien, um alle toten Pflanzen zu zersetzen. Das Ergebnis sehen wir heute als Kohle und Öl.

Vor kurzem (vor etwa 100 Jahren) hat der Mensch Plastik erfunden.

Nichts auf der Welt wusste, wie man es isst. Es wird Millionen von Jahren dauern, bis etwas es lernt.

Plastik ist in unserer Umwelt so verbreitet, dass Sie und ich ungefähr fünf Gramm Plastik pro Woche mit unserer Nahrung aufnehmen.

Das wäre nicht weiter schlimm, außer dass einige Plastikarten vom Körper als Hormone angesehen werden. Das kann schwerwiegende gesundheitliche Probleme verursachen. Aus demselben Grund schädigen sie auch die Ökologie erheblich.

Und sie häufen sich an. So sehr, dass zukünftige Zivilisationen in Millionen Jahren vielleicht an der von uns hinterlassenen Plastikschicht sehen können, dass wir hier waren.

Was können wir tun?

Kaufen Sie keine Einwegprodukte aus Plastik mehr.

Ersetzen Sie Ihre Plastikkäufe durch Papier.

Stimmen Sie für Politiker, die Umweltreformen unterstützen.

Sprechen Sie mit anderen über das Problem. Erhöhen Sie die Aufmerksamkeit. Und wenn Sie mit Leuten über das Problem sprechen, erwähnen Sie, dass sie letzte Woche genug Plastik gegessen haben, um eine Kreditkarte daraus zu machen.

Unterstützen Sie die Erforschung von Plastikersatzstoffen, oder erforschen Sie, wie Plastikabfälle zerstört werden können.

Schließlich müssen wir, ähnlich wie bei fossilen Brennstoffen, den Einsatz von Plastik einschränken – oder die Art des verwendeten Plastiks ändern.

Was ist das Schlimmste, was passieren kann? Eine Welt ohne 100 Milliarden Zigarettenkippen, die jeden Monat in die Lebensmittelkette gelangen? Meeresschildkröten, die nicht an Plastiktüten ersticken? Vögel, die sich nicht in Bierdosenringen verfangen?

Ich denke, ich kann mit einer Welt ohne diese leben.

Zurück zu Englisch…

Okay, let’s be real about this.

I know it is only supposed to be a few sentences, but this topic is too important for me to just gloss over.  Even if it is just an assignment for a Deutsch class.

Plastics.  Recycling them.  Doing away with them.

Let’s begin with some history. 

About 360 million years ago, plants pushed the atmosphere into an oxygen-heavy state.  So much oxygen, that it enabled insects like the dragonfly to reach the size of a large hawk.

This had the effect of reducing bacterial levels to such a low level that there weren’t enough of them to eat the plants as they died naturally.

So the dead plants piled up.  Really, they just piled up.  They eventually became peat beds.  After that, they were compressed (by geology) into coal.  This took place over a period of about sixty million years.  Piling up.  Like a teenager’s laundry gone crazy.

That period of time is called the “Carboniferous” period.

The key here is that those bodies didn’t get eaten.  And they were buried into the layers of rock and mud, locking the carbon away underground.  We call this “geologic sequestration.” 

So today, when you talk about coal and oil, you are actually talking about humans digging up sixty million years’ worth of carbon.  Digging it up and burning it. 

Which is a problem orders of magnitude greater than plastics.

But we can walk and chew gum at the same time, can’t we?

So, plastics. 

Here’s the news on plastics:

In 1907, a fellow from New York named Leo Baekeland created a substance called “Bakelite.”   

Bakelite

This stuff was useful for a lot of things, such as electric and heat insulation.  It was made into a number of different things.  Radio and phone cases.  Kitchenware.  Jewelry.  Pipe stems.  Children’s toys.  Even firearms. 

But what it was really useful for was showing the world how to create totally artificial materials.  Materials that were very easily shaped and extremely durable.  Oh, and cheap.  Let’s not forget cheap.

Over the next century, humans got very busy manufacturing stuff from various plastics.  It would be a challenge for any of you reading this right now to look around and not see something made with plastics.

This was all really cool.  Lots of people got rich, lots of things got cheap.  It seemed like a win-win.

But no one really gave a thought to biology.

Practically everything natural has something that eats it.  Dust has mites.  Poop has beetles.  Humans have big cats and sharks. 

But nothing on earth knows how to eat plastic. 

This is primarily because plastic has a unique shape to its molecules.  There aren’t any natural enzymes shaped right to grip plastic and rip it up to eat it. 

So, what we see today is something like the beginning of the Carboniferous period.  Plastics are piling up because there’s nothing that knows how to eat it.  And they are piling up in the ocean.  When a plastic bag from Rewe or Kroger hits the ground in the parking lot, it will eventually be bleached by the sun and shredded by wind, feet, tires, and so on. 

Those little shreds end up being washed into a sewer, or a stream. 

Those feed into rivers.

Which feed into the sea.

And little bits of plastic look like the traditional food that plankton eats. 

So those little creatures called plankton eat it.

And fish eat the plankton. 

And bigger fish eat those fish.

And eventually, bigger animals eat those fish.  Animals like us. 

In fact, there is so much plastic in the food chain that we people eat about five grams per week of “micro” plastic.  That’s like eating a credit card or a pen every week.  In a month, it’s over 21 grams.  That’s about as much as a comb, or a clothes hanger. 
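For the curious, the arithmetic behind those figures is just the 5 g/week estimate (from the WWF campaign linked at the end of this post) scaled up – a trivial sketch:

```python
# Scaling the ~5 g/week microplastic ingestion estimate (WWF figure)
# to longer periods.
GRAMS_PER_WEEK = 5
WEEKS_PER_YEAR = 52

per_month = GRAMS_PER_WEEK * WEEKS_PER_YEAR / 12   # average month
per_year = GRAMS_PER_WEEK * WEEKS_PER_YEAR

print(f"per month: {per_month:.1f} g")  # per month: 21.7 g
print(f"per year:  {per_year} g")       # per year:  260 g
```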

Now, that normally would pass right through us, since we can’t digest the stuff any better than other creatures.

But plastic has some hidden problems.  For example, some plastics are shaped just right, or emit chemicals that are shaped just right.  Just right for what?  They’re shaped like natural hormones.  And our bodies react to them like normal hormones.  Which can throw a child’s growth out of whack.  Or induce thyroid disease.  Or cause a man to grow boobs. 

I think I might have given you enough now to react.  Perhaps your reaction is “Holy shit, what do I do?”

Well, a few things.

For starters – stop buying one-use plastics.  Use paper sacks at the supermarket.  Don’t buy plastic straws.  Recycle everything.  Use glass.  Use paper.  Use wood.  When you have to use plastic, make sure it will last you.  Get wax paper instead of cling-film. 

Vote.  Vote for candidates who recognize the problem for what it is.  Vote against candidates who deny there is a problem.  Viciously mock people who pretend there isn’t a problem. 

Vote for candidates who support scientific research into bio-plastics.  Like the research recently completed by a Mexican scientist who made plastic out of cactus.  That plastic degrades in a month, and people can eat it safely. 

Support research on biological plastic destruction.  Just because nothing knows how to eat it today doesn’t mean there won’t be something next year.  Researchers can invent microbes that eat practically anything.  Who says there isn’t a possible digestive path for plastics to be found? 

Speak to others about the problem.  Don’t bother talking about the Carboniferous era when you do.  Just tell them nothing on earth knows how to eat that stuff.  Tell them that Porsche automobiles are long-lasting: two out of every three ever built are still out there.  Then tell them plastic has that beat: every piece of plastic ever made, except for the ones that were incinerated, is still out there. 

And tell them how much of it they ate last week. 

In the end, much like fossil fuels, we have to curb our use of plastics.  Or change what kind of plastics we use.

What’s the worst that can happen?  A world without 100 billion cigarette butts entering the food chain every month?

A world where sea turtles aren’t choked to death on plastic sacks? Where birds don’t die caught in beer-can rings?

I think I can live in a world without those things.

https://www.bbc.com/news/av/stories-48497933/how-to-make-biodegradable-plastic-from-cactus-juice

https://www.adweek.com/creativity/you-eat-a-credit-cards-worth-of-plastic-each-week-says-this-unsettling-wwf-campaign/

https://ucmp.berkeley.edu/carboniferous/carboniferous.php

https://www.nationalgeographic.org/encyclopedia/coal/


Happy birthday, Delphi :)

Back on Feb 14 in 1995, a long-awaited event took place – Borland’s “Turbo Pascal” got a revamp on its entire genome, and a new species of tool was born: Delphi. Other tools existed in a similar fashion at the time (most notably Visual Basic v3 and PowerBuilder), but none of them were nearly as fast or as capable as Delphi. Right out of the gate you knew this tool was going to be the new gold standard of development systems.

It’s still running strong today – though not as noticeable in the market, mainly because the owners of the system don’t have the deep pockets that its competitors do. That’s always been the case, though. Microsoft always had more cash and BS to fling around, so what did we do?

We forced them to become us.

Much like when Rome conquered Greece, the conquered culture won the larger battle. Visual C# from Microsoft is, for all intents and purposes, a fork of Delphi – put code from the two side by side and you can see how closely they compare. However…

Today Delphi isn’t just about Windows – with it, you can target Windows, Linux, Android, and even (*hurk!*) Apple devices. I’ve even begun messing around getting it to talk to Arduinos lately.

I have to say, 25 years on, it’s still a rocking platform.


How to build an un-hackable password

Okay, another friend got hacked yesterday – here’s how to build an un-hackable password:

1. Pick a favorite date, like Bastille Day, your dog’s birthday, the moon landing or something.  You could also do something like a favorite movie mixed with its author’s name.

2. Reverse it, so for example today would be 8102voN82.  You could do the film thing like “Clarke1002Arthur”.

3. Pick two letters from the site you’re visiting or app you’re using, best is the leading letters of the first two syllables, so FaceBook would be FB.

4. Pick two numbers and use the Shift key to make them special chars, so 3-4 would be #$

5. String all these together to make your password, so my example here would be Clarke1002ArthurFB#$. Or add them together the other way to make #$FBClarke1002Arthur.

You can re-use this algorithm anywhere; it’ll give you a unique password for every site or app, and you only have to remember the pattern you use to build the password. It should take an average PC about 3 million years to crack it brute-force style.
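The recipe above can be sketched as a short function – a minimal illustration in Python, using the date variant of the scheme; the function name and the rough “two syllables” heuristic are my own, not part of the original recipe:

```python
def build_password(base: str, site: str, shifted: str = "#$") -> str:
    """Build a site-specific password from a memorable base string.

    base:    a date string, e.g. "28Nov2018" (step 1)
    site:    the site or app name, e.g. "FaceBook" (step 3)
    shifted: two shift-modified digits, e.g. 3-4 -> "#$" (step 4)
    """
    reversed_base = base[::-1]  # step 2: reverse it -> "8102voN82"
    # step 3 (rough cut): first letter plus the next capital letter,
    # standing in for "leading letters of the first two syllables"
    site_code = site[0] + next(c for c in site[1:] if c.isupper())
    return reversed_base + site_code + shifted  # step 5: string them together

print(build_password("28Nov2018", "FaceBook"))  # 8102voN82FB#$
```

One caveat on this sketch: the “next capital letter” trick only works for camel-cased names like FaceBook; for an all-lowercase site name you’d pick the second syllable’s letter by hand.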


Leadership of IT/Software Teams in a Regulatory Environment

Part 3 – now let’s introduce regulation

First, let’s define “regulation” in context.  Specifically, I’m referring to ISO 13485, EU medical device rules (GMP, or “Good Manufacturing Practice”), and FDA medical device standards. 

Let’s also be clear: these aren’t just “guidelines” (insert Barbossa quote here), they’re law.

As law, what is their purpose?  The goal of this regulation is to make certain that the products made by manufacturers are safe, effective, consistent and unadulterated.  As a result, they end up being implemented in ways that minimize mistakes and errors, and that reduce or eliminate contamination wherever it can occur.  Consumers, as the beneficiaries of the regulation, get a high level of assurance that the products and devices they are exposed to do the job they are supposed to, in a way that is not dangerous to them.

It is also worth noting here that as law, violations can result in steep fines, product recalls, and potentially even jail time.  This is serious stuff.

That means you, as a leader, have the responsibility to implement compliance measures, and to ensure that your IT and software groups understand and follow the measures you put in place.

The requirements of regulation can stand in direct opposition to some of the activities your group performs.  Quite often a software group will be operating on an “Agile” methodology, possibly with Scrum sprints and so forth – and these methods are oriented toward speed.

Regulatory requirements will impair that speed and that slowdown will appear to be simply “getting in the way” of work.  Persons unfamiliar with the need for regulation will see it as unduly burdensome – indeed, I can quote a peer who totally missed the mark of what those regs were for:  “No one cares anyway, these are just here to make it look good.”

Needless to say, he didn’t last long.  Sadly, many developers – including most who come directly from university – have no experience with such regulation and will view it as an unnecessary impediment.  I myself, as a software developer many years ago, left a contract that I felt was overly constraining because it operated under FDA medical device regulation, and I didn’t appreciate why it took six months to get from proposing a spelling change to final deployment of the release containing the correction.

These requirements make progress feel very plodding and restrictive, particularly for those junior members of your team.  And they never end.  They are part of the job, every single day.  It isn’t just something you do once, qualify for, and then amble merrily forward – these actions and activities go into the job and become part of the fabric of how you operate.

What can good leadership do in such a conflict?  What is to be done here, and how can we exert good leadership in this environment?

Let’s examine the conflict first.  There are a few sources of conflict here:

  • Requirements for thorough examination of work products and the preparation for creating them can seem like an attack on the respect for an IT professional’s work.
  • The burdensome nature of documentation and preparation in advance of work can create a very tedious work environment, making it hard to feel like you want to go in.
  • Effects of both of the above and other impacts of regulation can fray the nerves a bit, and place demands on your mediation skills that you didn’t expect.

Needless to say, rising to the occasion here requires a lot of patience and discipline.  It also needs a great communicator, which I’ll get to in a moment.

Let’s begin with organization.

A lot of what you do, whether it is network infrastructure or software development, will require up-front planning – and most importantly, documentation of that planning, in order to provide an audit trail.  Whether you’re aiming to be compliant with ISO, GMP, or FDA, there’s a key question that you have to ask before you begin an operation:

Will my action, or any effect of my action, have direct impact on our final product?

For some things, the answer will of course be “no” – for example, establishing a new backup plan for your email server.  Regardless, before you begin, that question has to be asked and the answer documented.  For most regulation, that documentation ends when the answer is “no.”

However, what if the answer is “yes”?  In that case, you have to assess what you are doing, why you are doing it, why you think doing it will satisfy the original stated need, and what the risks are; plan mitigation for those risks; and stage your rollout to assess success or failure conditions at every milestone.  This entire process is generally called “validation”.
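As a sketch of what that assessment covers, the questions can be captured in a simple record – this is a hypothetical illustration in Python, not any official GMP/FDA form, and every field name here is my own invention:

```python
from dataclasses import dataclass, field

@dataclass
class ValidationRecord:
    """Hypothetical sketch of the questions a validation write-up answers."""
    change: str                # what you are doing
    rationale: str             # why you are doing it
    satisfies_need: str        # why it will satisfy the original stated need
    impacts_product: bool      # the gating question: does this touch the product?
    risks: list = field(default_factory=list)        # identified risks
    mitigations: list = field(default_factory=list)  # planned mitigations
    milestones: list = field(default_factory=list)   # staged success/fail checkpoints

rec = ValidationRecord(
    change="Upgrade label-printer firmware",
    rationale="Vendor security patch",
    satisfies_need="Closes a vulnerability on the packaging line",
    impacts_product=True,
    risks=["Mislabeled units after upgrade"],
    mitigations=["Manual label verification for the first production run"],
    milestones=["Test on line A", "Roll out to lines B through D"],
)

# For most regulation, documentation can stop when the answer is "no":
needs_full_validation = rec.impacts_product  # True -> full validation required
```

The example change, risks, and mitigations are invented for illustration; the point is only that every field gets answered and documented before work begins.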

In my own context, I created a form that let me and the others on my team ask that question, and then lay out the long set of considerations for changes to hardware and software in the IT group.  That form went into our Confluence server, and could be linked from there to the Jira tickets created to track the progress of the tasks.  For your own use, in whichever issue-tracking system you use (I refer to Jira here just because it’s so common and it’s the one I’m most familiar with), you can put a field in your issues asking that very question about impact on the final product.  Then you link it to a new issue page with the content required for the “yes” answer.

And once filled out, the page may need to be locked in read-only status (GMP and FDA requirements both demand that no editing or deletion be available after finalization).

This satisfies the regulatory requirement, and the organization provides your team with clear addenda to the original requirements.

The need for organization (and communication, below) requires your Commitment.  Not just commitment to the team, but commitment to the mission of the company and the production of whatever products require the regulatory oversight.  Your IT / development group is a key player in the delivery of solid product to market.  If the commitment isn’t there, the team will know, and it’ll show in how you deliver.

Next step we should look at is communication.

Quite possibly, this should be your first consideration, but I wrote it second here and we’ll leave it at that.

In your team, from day one of a person’s start or day one of implementing compliance, communication of the compliance effort will be key to making sure everyone stays on board with it.

I’ve found in both software development and network architecture that the most important factor in keeping the team aligned is making sure everyone knows and is clear about the answer to the question: “Why are we doing this?”  Knowing why gives us all not only a common ground and a team unifier, it also helps us all determine potentially better solutions than we could with just a team head knowing the why and issuing directives to meet it.

This also applies to ensuring the team gets on board, and stays on board, with compliance efforts.  They have to see the broadest picture of how the regulations help the business.  If your company manufactures widgets used in surgeries, your team needs to be reminded (perhaps even daily) that what you do helps people safely undergo and survive life-saving operations.  Their actions, every one of them, can potentially impact how well a widget works after manufacture.

In a lot of ways, this means that what you’re doing is linking the following of the regulation with the provision of a quality product, and instilling a culture of quality that goes into the tiniest details of everything your team does.

Which, when you think about it, is generally needed for a company to become great, isn’t it?

I make it sound easy, but it isn’t.  You have to beat this drum every day, and you yourself, as the leader, need to be its biggest evangelist.  Your Personality and Knowledge (see Part 1) tie in here, and are key factors in enabling this – you have to be able to be positive about the requirement of regulation and knowledgeable about its execution.  There will be days when you are tired, and simply don’t want to deal with it.  But remember – those widgets depend on you.  They depend on your team.  And the people undergoing surgery depend on you all.

Finally, let’s talk expectations

I don’t really want to call this a “final” topic, because an enormous number of factors can affect it.  But there are only so many hours in the day, and I’m calling these my top three items for being a successful leader in this environment.

Setting expectations of stakeholders and team members for the execution of projects and tasks is a key element of all work, whether it’s regulated or not.  Telling your boss how long project Y will take, and getting good estimates from your staff on how long tasks A, B, and C will take are key to that.  Regulations increase workload, there’s no two ways about that.  They also slow down progress.  But they do enhance quality.  They benefit consistency in the product(s) your company makes.  Knowing what features are being prepared and what to expect in each release, as well as knowing what steps are being taken to mitigate the risks involved will put everyone more at ease (and will ensure no interruption or disruption in production).

Linking the goals of the regulation with the production of a quality product falls directly into your skills of Motivation for your team.  Getting the team’s buy-in by involving them in the setting of proper expectations is the way to ensure the best possible motivation for success.

Making sure your team knows to take into account the added burden of the regulatory requirements will get you a good ways towards ensuring that you don’t have overblown demands on your team.  It also involves them all along the way to ensure they retain respect and participate in their own work environment.  This really applies to all environments, but it deserves special attention in a situation where a great deal of up-front and ongoing efforts require such detail.

Is there some magic bullet here?

Well, no.  Obviously there’s no “silver bullet” answer to anything in IT, but there are some very cool tools that can help you along the way.  I’ve mentioned Jira and Confluence already, and these are insanely useful in establishing organization, team communication, and helping to set expectations.  Really if I were setting up practically any environment, these tools would be first on my list.

There are also document management systems which enable GMP/FDA-compliant protection of docs, which might be required.  When these enter the picture, I generally advise that one keeps only what is absolutely necessary in such a DMS, and the rest in Confluence.

Additionally, Jira or other issue-management systems can be tailored to monitor risks and mitigation efforts, as well as validation efforts.  The reporting capabilities of these systems can make scrum meetings much simpler, as well as providing outgoing communication to non-IT stakeholders in the form of expected- versus delivered-work estimates, etc.

In Summary

Gathering this up, a regulatory environment heightens the need for a clear communication path as well as requiring a more organized IT department.  The company’s, and really the market’s, expectations of your firm’s output put an additional burden of caution on your IT staff.  This may not be suitable for all tech people, and there’s no shame in recognizing that you might not be one of those for whom this is a good working environment.

If you’re comfortable with it, though, you’ll find that your skills in communication and organization are being called upon considerably more strongly than in a “normal” non-regulated situation.  You’ll still be responsible for aiding in motivation, commitment, and the rest, but GMP and FDA regs put you in a special situation where you have to be stronger in certain areas to enable your team to succeed.

Leadership in its raw form doesn’t change – but different aspects of it are called upon more strongly in a regulated environment, and you need to be prepared to engage appropriately.

Part 1

Part 2


Let’s Talk Briefly about Deep Learning

You’ve heard the name over and over, and for most of you it probably settles into the same category as Harry Potter’s “levitation charm” as far as whether you need to understand it.  That’s cool, most people will never need to know this stuff, in the same fashion as you don’t need to know the specific chemical reactions that go on when gasoline burns inside your engine cylinders.  You just want to turn the key and go!

When the term “Deep Learning” started getting used, the media clamped onto it because it was a pretty sexy marketing term, and it sold a lot of print and eyeballs.  This is because we have a somewhat intuitive sense of what we conceive “Deep Learning” to be – it conjures up images of professors doing serious research and coming up with great discoveries, etc.  Because of this, a lot of intrinsic value is being assigned to the topic of Deep Learning – so much so that it seems a startup can automatically land a B-round or later simply by stating that “we use Deep Learning to make the best decisions on what goes into today’s lunch soup”.  That’s a bit of a stretch, but not by much.

So What Is It?

We need to start by looking at the name:  Deep Learning.  These two words, just like pretty much everything in computerese, have very specific meanings which often don’t attach well to the real world.  In this case, they do, which is fortunate for me because it’ll help keep this article brief.

Let’s tackle each word separately.

Learning.  A DL system quite often has a multitude of modules or units, each of which is responsible for Learning something.  If it’s a good object-oriented model, each unit is responsible for learning one thing, and one thing only, and becoming very good at that one thing.  It will have inputs, and it will have outputs, and what it does internally might involve multiple stages of analysis of those inputs to craft an output that has “meaning” to the other parts of the system that use it.

What would be a good example of a learning unit?  How about something in image identification?  Within a satellite photo, parts of the image get pulled out as possible missiles or tanks.  These candidate regions have measurable attributes, such as the ratio of length to width, possibly their height (if the image was taken from an angle not directly overhead, height can be calculated), maybe differences in shading from top to bottom (the turret of a tank will cast shadows), etc.

A unit that is given candidate images might build a statistical history based on its prior attempts to classify – its positive and negative results – and may (if it is particularly advanced) also ‘tune’ its results based on things like geographic location and so on.  In operation, it will come back with a statistical probability that object X, which it was given, is a tank or a missile.  Over time it will be ‘trained’ – either by itself, or by humans feeding it images with flags saying “tank” or “no tank” (when humans give it cues like this, it is called “supervised learning”) – to get better at identifying candidate objects.
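A toy version of such a unit, in Python, might look like this – a sketch only, reducing the “statistical history” to counts over a single attribute (the length-to-width ratio), where a real system would use many attributes and a proper model:

```python
from collections import defaultdict

class LearningUnit:
    """Toy supervised unit: learns P(tank | aspect-ratio bucket) from labels."""

    def __init__(self, bucket_size=0.5):
        self.bucket_size = bucket_size
        # bucket -> [tank count, not-tank count]: the "statistical history"
        self.counts = defaultdict(lambda: [0, 0])

    def _bucket(self, ratio):
        return round(ratio / self.bucket_size)

    def train(self, ratio, is_tank):
        """Supervised learning: a human flags each example 'tank' or 'no tank'."""
        self.counts[self._bucket(ratio)][0 if is_tank else 1] += 1

    def predict(self, ratio):
        """Return the learned probability that this object is a tank."""
        tank, other = self.counts[self._bucket(ratio)]
        total = tank + other
        return tank / total if total else 0.5  # no history yet: 50/50

unit = LearningUnit()
for ratio, label in [(1.6, True), (1.7, True), (1.5, True),
                     (3.0, False), (3.2, False)]:
    unit.train(ratio, label)

print(unit.predict(1.55))  # 1.0 – every prior example in that bucket was a tank
```

With more (and messier) labeled examples, the probabilities drift toward realistic values instead of 0 or 1 – which is the “getting better over time” described above.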

Deep.  The power of a system using Deep Learning is generally amplified by how many different things it can learn.  Each of these learnable things, when placed in line with the others (or, if you prefer the visual, “stacked up”), adds depth.  A system that stacks many layers is considered Deep.

The example above, where a system was given a photo object, might be part of a system which takes a single large photograph, and multiple different units act on it.  Let’s walk through the hypothetical layers:

The first might look for straight lines.  It trains itself to be better and better at examining colors of pixels in images and finding ones that line up as straight, given the resolution of the image.

The second looks for corners, where straight lines meet.  It takes the identified straight lines from unit one, and then will seek places where their ends meet.  It will train itself to avoid “T” bits, and might decide that “rounded” corners are acceptable within a certain threshold.  It outputs its data to…

The third takes corners and tries to find ‘boxes’ – places where multiple corners might form an enclosed space.  It must train itself to avoid opposite-facing corners, etc.  It then sends its candidate ‘boxes’ to:

The fourth, which takes boxes and begins to look for color shading gradients which can describe sides, tops, fins, etc.  It compares these values to its own historical knowledge and ‘learns’ to classify these boxes as specific objects – and outputs things like “75% missile” or “80% tank”, etc.  Particularly sophisticated versions might even be able to compare signatures finely enough as to identify the type of missile or tank.
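Those four hypothetical layers can be sketched as a stack, where each unit’s output feeds the next.  The stub functions below stand in for trained units and return canned data purely for illustration:

```python
# Stub "units" – in a real system each would be a trained model, not canned data.
def find_lines(image):
    return {"lines": ["l1", "l2", "l3", "l4"]}      # layer 1: straight lines

def find_corners(features):
    return {"corners": ["c1", "c2", "c3", "c4"]}    # layer 2: meeting line ends

def find_boxes(features):
    return {"boxes": ["box1"]}                      # layer 3: enclosed spaces

def classify(features):
    # layer 4: probabilities, e.g. "80% tank"
    return [{"object": "box1", "tank": 0.80, "missile": 0.05}]

def deep_pipeline(image, layers):
    """'Deep' just means many learned layers stacked in sequence."""
    result = image
    for layer in layers:
        result = layer(result)  # each unit consumes the previous unit's output
    return result

detections = deep_pipeline("satellite.png",
                           [find_lines, find_corners, find_boxes, classify])
print(detections[0])  # {'object': 'box1', 'tank': 0.8, 'missile': 0.05}
```

The stacking itself is the only real point here: depth comes from chaining units, each trained at its own narrow job.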

Each of the units described above might itself be composed of multiple sub-units that share connections with one another (which, by the way, is a “neural network” – a different discussion; I might write about that some time too).  These sub-units would be hidden from the units outside their parent.

So between the two, we have a Deep system, that Learns to do its job better over time.

And before you ask, yes, there is also “Shallow Learning” – Shallow simply refers to a “stack” that doesn’t have many layers.  There’s no set boundary between what “Shallow” versus “Deep” is.

How Good Is It?

As with pretty much every computer system ever invented, the answer to the question “How good is it?” is:  GIGO.  Garbage in, garbage out.  The system is only as good as its training.  In the example above, if insufficient valid positive and negative images are given to the system to train on, it can suffer from muddied “perception” and never get better.

However, DL is powerful.  By that, I mean that when compared to a human, it can reach or exceed our capabilities within its specialized tasking in a very short (comparatively) period of time.

For example, I have a set of rules I built in Outlook over the course of probably fifteen years or more, and these rules successfully negate 99.5% of the spam that lands in my inbox (today’s example: 450+ messages are in my ‘Deleted’ folder, about 5 landed in my inbox that had to be deleted).  Occasionally, I get a ‘false positive’ and a good mail will get deleted, but it’s pretty rare.  These rules have taken over a decade to produce, and act on a host of subject triggers, address triggers, body triggers, etc.  A DL system can establish a similar ‘hit rate’ to my rules in only a few days, perhaps as fast as a few hours.

But these factors depend on how well the system is built, and how good its learning data is.

What Does It Mean To Me?

Well, now that’s the catch.  Depending on your life, DL systems may not have a direct impact on you at all.  Plumbers, for example, are unlikely to care.  Insurers, actuaries and other folks whose livelihoods depend on statistical analysis, however, had better sit up and take notice.  The initial “green field” territories where statistics are the primary function have already been broadly affected by Deep Learning.  Today, areas such as FinTech and advertising sales are steadily moving to use DL in certain aspects of their business.  Self-driving vehicles are a perfect example of another “Deep Learning” application.  What do you think those first few highly-publicized autonomous vehicle voyages were a few years back?  Supervised training.  They were teaching the vehicles how not to get into wrecks.

We’re just beginning to see learning systems entering healthcare and other more ‘soft’ sectors.

And here is where the warning bells sound.  Not because SkyNet is going to set off the rise of the machines (though there is some legitimate reason to be concerned in that regard, particularly when you see robot chassis and drones armed with weapons).  No, the concern should presently be directed at how these tools get used.  As I mentioned, these are powerful systems that can be used to great benefit – and can also be used to do great harm.

For example, one of the innocuous-sounding sentences I’ve seen regarding the application of a learning system to healthcare was:  “Given the patient’s past history, and their medical claims, are you able to predict the cost for the next year?” (Healthcare IT News).  Okay, in context, that question was raised with the intent of predicting utilization – how much hospital care might be needed across a population.

But what if the question is “Find the best combination of interest rates to keep people paying their credit card bills without completely bankrupting them, and to maintain their indebtedness for the longest period.”   In that case, Deep Learning can be used to figuratively enslave them.

What if that question was asked by an insurance executive in the USA, wanting to see where the profit line cuts and using that data to kick people off their insurance who would negatively impact the company’s margin?  In that case, Deep Learning can be used quite literally to kill people.

The tools will only be used within the ethical boundaries set by the persons who use them.  In the United States and several other countries, there are certain political parties who feel that ethics have no place in business – that might makes right.  Just as with dangerous vehicles, dangerous weapons, and other hazards, we as members of our societies must make our voices heard – through the voting booth, in our investment choices, in journalistic endeavors – and ensure that these tools are used to benefit, not harm, the public.

It might even be worth considering, from a software engineer’s perspective, that perhaps it is time to establish something similar to the medical profession’s Hippocratic Oath:

First, do no harm.


Leadership of IT/Software Teams in a Regulatory Environment

Part 2 – How Does This Apply to IT?

All right, so we’ve laid out what it means to be a leader – but how does this apply to IT?

At this stage we have to look at what it means to work in an IT environment – and there are some wildly different attitudes to deal with.  Let’s approach this from the two extremes, as the rest fall somewhere in between.  I will differentiate them using names that are general stereotypes, so please forgive me if you feel yourself categorized…my intent is to speak in generalizations, not about specific persons.

The two types are:

Administrators:  generally these are “IT Managers”, “System Administrators”, “DBAs”, “Network Administrators”, or various other similar names.  The person with one of these labels is generally tasked with keeping things running smoothly so that users on the computer system can perform their tasks efficiently.  As a result, these persons are driven largely by the avoidance of downtime – and this means maintaining the network’s status quo.  Their work revolves around operations such as fault monitoring, storage maintenance, backups and recovery, etc.

…and…

Developers:  these persons are focused on creating new systems and software to enable users to accomplish their tasks in innovative or more effective ways.  These can be web developers, application developers, front-end developers, etc.  This career is dominated by a constantly-changing landscape of new languages and architectures, and where the Administrator has a fanatical devotion to defense of the existing systems, Developers have an equally devoted attitude towards inventing new systems.

Naturally, these two come into conflict when Developers have new systems they wish to add to the production networks.  Lately (in the last 5-10 years), this meeting point has seen the growth of a new “bridge” career, DevOps.

Additionally, let’s not overlook the elephant in the room here.  IT staff have a reputational stereotype of being less than optimal in their social skills.  I think we can flatly state that there is more than just a grain of truth to this, and in handling your staff, one should take this into account.  This isn’t making excuses for them, it is preparing you to handle them effectively.

Despite their differences, the personnel on these career tracks have many similar needs when it comes to the exercise of leadership.  This is not an exhaustive list – just the major bullets.

Desire for Respect

Let’s face it – most of these individuals have an ego, and because they are not especially social creatures, they tend to be a bit sensitive.  Fortunately, as many of them tend toward an introverted personality, they don’t demand a great many special efforts in this regard, and often find public displays of recognition uncomfortable.  That doesn’t make their need for respect any less valid, however.  In many ways, it actually makes it more challenging.  Often it is best shown with honest thanks for contributions.

Unlike a salesperson, whose recognition is often quite public and rewarded monetarily (my suspicion of why this is, is because it has been so easy to measure the performance of a salesperson in the volume of revenue they generate), awarding respect to an IT person tends to require a more personal touch.  It shows you know what they are doing for you (even if you don’t necessarily understand the nitty-gritty of it), and that you see the effort it requires to accomplish what they do.

Respecting your staff shows them that you do care, demonstrates your commitment to them, is integral to good communication with them, and is a source of motivation.

Need for a Quality Work Environment

IT Workers spend a great deal of their time thinking.  This level of thought requires uninterrupted time spent researching, exploring, experimenting, or even simply sitting.  It is the antithesis to the “buzzy” open office which seems so popular among business-degree managers the last couple decades.  (One can easily unearth enough research to choke a horse demonstrating how bad an open office is for any worker at all, much less IT staff.)

I often tell people “IT isn’t rocket science,” in a turnaround on the classic cliché – “Because rocket science is only first-year physics, and IT is harder than that.”  To perform this sort of thought-work, one must have a work environment that protects the workers’ concentration.

In addition to a space to work in, proper tools and equipment are also needed.  IT workers can easily grow ‘stale’ relative to the rest of the industry if they don’t keep up to speed on the latest developments, so a training budget and an equipment budget should be included as part of the overall budget for an IT position.

Providing a quality work environment shows care, expresses your knowledge of what they require, and demonstrates that you are committed to them.   

Desire for Recognition

I mentioned just before that recognizing a salesperson’s contributions is an easy affair – it can be tied directly to that person’s revenue generation in the form of commissions.  Measuring the contributions of an IT worker is much more difficult – and should be a major effort.  As leaders and managers, we must ensure that we put effort into recognizing the efforts of our IT staff.  This is a particular failing many companies have when dealing with Administrators in particular, because most non-IT persons really only think of the Admins when something breaks.

Quite often, the hardest tasks an IT person performs also end up being the least visible.  Ironically, some of the simplest things they do end up generating the most visible results.  This is enough of a truism that I will often instruct my junior employees that when someone thanks them profusely for something that really didn’t require a great deal of action, they should save those kudos and hold onto them for the time when they really do put a lot of blood and tears into something – because those big heavy tasks are often the ones that “keep the lights on” and the users never even know they happened.

So the question here is how do we find events worthy of recognition in a group whose events are not necessarily widely visible?  There are a few ways to approach this:

  1. Observe events around the world. IT is no longer restricted by geography, and these people are guarding you from a wide variety of threats as well as building new systems for you.  Take the worm “NotPetya”, for example – a global event that cost roughly 300 million dollars at the shipping giant Maersk alone.  Did your network suffer from it?  If the answer is ‘no,’ then there is a good example of where your networking and system administration personnel did their jobs well.
  2. Establish a career track for tech employees. Often, firms will have career tracks that have only one lane, and end up in a business office.  Following this track is both limiting – there can only be so many managers – and crippling:  by promoting a successful IT staffer, you can gain a crappy manager at the expense of a good tech.  If, on the other hand, you put together a “graded tech track” with steps that have both titular and compensation benefits, you can establish a clear path of recognition that IT staffers can aspire to and excel in.
  3. Budget for career-based training. This is closely tied to ‘quality work environment,’ but is also a recognition factor – you recognize value in an employee by keeping him or her current with moving technology.  Each of your employees has a salary figure and an overhead figure that goes into your budget.  Training costs should be included in that overhead figure – enough to send someone to a week’s training once a year is what I’d advise.  Tell your employees, and have them choose what that training money gets spent on.  Ask them to pick something relevant to the job, or to sell you on why it is relevant if you don’t see it.  Once they have something, send them.  Give them on-the-spot bonuses for becoming certified in some technology.  Pay for the first exam and maybe even a re-try if they don’t pass the first time.  A classic story about this (I don’t recall where I first heard it):

Manager: “I want to spend $x to send so-and-so to training”

C-level:  “Why?”

Manager:  “Because it’ll make them better employees.”  (Duh.)

C-level:  “What if I train them and they leave?”

Manager:  “What if we don’t train them and they stay?”

Recognition of your staff sends a message – it is clear communication.  It also demonstrates commitment and care, and shows a personality with long-term integrity.  Not surprisingly, it is also a great source of motivation, so long as it is provided fairly.

Need for Downtime

Not everyone can run in a constant type-A frenzy.  IT people are no exception to stress: they can burn out just like anyone else, and quite often the crises they deal with have a far more strategic impact than those in the rest of the company.  You want IT staffers to have a clear head when tackling your business’s problems, because if they don’t, the repercussions can be more far-reaching than you’d like.  So…provide them with some downtime.  Many startups address this with games (ping-pong tables, foosball, etc.), and the best ones recognize it with time itself.  I recommend giving your staffers a “20% buffer” – meaning that during a given week, they can spend a day’s worth of time researching new stuff, exploring new tech, etc.  Back when I first started contracting in the 90s, our firm mandated that 80% of our time be spent on billable work; the remaining 20% was ours.  Building stuff, reading up on new tech, whatever.  This was a really great way to let off some steam, and the team knew not to abuse it.

Some admins are on call 24/7.  Many developers and testers will spend loads of extra hours at crunch-time before a release.  Remember when they have to put in those extra hours, and give them downtime to compensate.

IT people also tend to get lost in tunnel vision very easily, spending far more hours in the office than they should, and this can cost them in their home lives.  When the five o’clock hour hits on a regular day, tell them to go home.  Help them keep a good work-life balance.  A burned-out employee who quits after two years is of no use to you, and letting them burn out that way betrays the care they trust you to have.  You need them to be willing to spend that extra time when it’s needed – never take it for granted.

Providing downtime again demonstrates you care, that you have the knowledge necessary to lead them, that you are committed to their well-being and long term career, and motivates them to learn more to help them excel in their jobs.

Mediation

Lastly, I want to focus on an aspect of your role that binds the entire team together.  At the beginning of this part, I pointed out two separate groups with conflicting agendas – admins, who wish to maintain a status quo of sorts, and those actively seeking to change it.  These two groups often come into conflict with each other, and with other parts of the company.  The friction might be professional; it may very well be personal.  Whatever the cause, a serious conflict left to fester can damage your team irreparably.

Whenever these conflicts arise (and let’s assume you know which ones can resolve themselves and which ones require your intervention), your role becomes that of a mediator.  I strongly suggest you enroll in a class on mediation, or at least read a few books on the subject (surprisingly, a lot of books focused on relationship therapy can provide some insight here as well).  This skill is an absolute must-have if you wish to be a leader rather than simply a manager.  A famous line about proper mediation is that when a compromise is found, neither side goes away happy.  However, as a mediator you can at least see to it that the sides also don’t go away mad.

Proper mediation relies entirely on your skill as a communicator, and will strain your listening muscles heavily.  It is, however, a key element in demonstrating to your team that your integrity is of value to them.

Summary

The aspects of leadership play out in a lot of ways – subtle and not so – with IT workers, who are a rather unique bunch.

The six qualities of leadership (care, personality, knowledge, motivation, commitment, and communication) all contribute in different ways to meet the unique needs of the IT team.  When you meet those needs for recognition, respect, downtime, fair mediation, and provide these in a quality work environment, you actively use each of the six to help your team.  And as your actions serve as mechanisms for communication, the team will recognize that you are leading, not just ‘managing’ or ‘ordering.’

Part 1

Part 3 (end)


Leadership of IT/Software Teams in a Regulatory Environment


Thomas Theobald

Part 1 – What Exactly Is “Leadership”?

leadership

/ˈliːdəʃɪp/

noun

“the action of leading a group of people or an organization, or the ability to do this.”

Not exactly helpful, are they?  Wikipedia is slightly better:

“is a formal or informal contextually rooted and goal-influencing process that occurs between a leader and a follower, groups of followers, or institutions. The science of leadership is the systematic study of this process and its outcomes, as well as how this process depends on the leader’s traits and behaviors, observer inferences about the leader’s characteristics, and observer attributions made regarding the outcomes of the entity led[1].”  (Antonakis, John; Day, David V. (2017). The Nature of Leadership. )

There are thousands upon thousands of different works on the subject of leadership, going back farther into history than the Roman Empire.  What it really boils down to, at its core, is that to lead is to influence others into behaviors, commitments and actions in service of one’s goal(s). 

Let’s dive in a little.  We’re going to explore what it means to exercise “Leadership”, what it takes to gain that influence over others.

First, we must recognize that influence over others isn’t something we can take – it is something that is given.  Whether voluntarily or through coercion, a person chooses to be influenced by someone.  That choice may not be consciously made…it may be something baser, more instinctual, and quite often this is the case whether one is in an office, a political party, a religious revival, or other venue.  It appeals to not only the comprehension of the team, but also to their emotional triggers.

Leadership is less a mechanistic approach to guiding people, and much more a “way of living” in the context of one’s colleagues.

To exercise leadership, then, is a bit like playing a psychological game, guiding others’ thought processes to coincide with one’s own.  Just as each person owns his or her own emotions, team members must make their own commitments to the team – a leader finds ways to enable them to make those commitments.  How does one do that?  The United States Marine Corps focuses on six factors that lead to successful leadership:

Care

This, I feel, is the most important distinction between a leader and a manager.  A leader must care about the persons entrusted to them.  That care is demonstrated by actions that follow one’s promises and commitments.  When you care for your team, you engender a protective atmosphere for them, enabling them to feel safer around you – you become their guardian as well as their colleague.

Personality

A specific personality type is not absolutely necessary to exhibit leadership, though some personalities find the communication aspects easier.  I think it is easier to understand ‘personality’ here as the sincerity, or genuineness, of the leader.  When you are sincere with your team, they know they can place their trust in you – another safety factor.

Knowledge

Having boundless knowledge on the relevant topics is very helpful, of course – but is it really necessary?  Not so.  Understanding the limits of one’s knowledge is equally important.  The most important aspect of knowledge, though, is recognizing where it is present in your team and promoting team members’ own knowledge.  When you use your team’s knowledge wherever and whenever possible to enable them to step forward, even if you already can answer the need yourself, you build your team’s confidence in themselves – you promote their ability to excel, strengthening them.

Motivation

Succinctly, motivation in this context is the strength of effort used in the persistent pursuit of one’s goals.  Maslow’s hierarchy of needs shines a light directly on the source of the motivation of every human:  we all share physiological and safety needs; we quite often share the needs related to belongingness, love, and esteem; and we are all usually quite unique in our self-actualization needs.  A real leader finds ways to bind the needs of the team at various levels of the hierarchy together, directing the members towards a common goal – motivating them all by giving them ways to self-motivate.

Commitment

Closely tied to the concept of sincerity I mentioned above with Personality, Commitment means to support the team with no reservations, during good and bad times.  It embraces the team as a living thing, a culture, rather than a tool that can be pulled from a satchel and put away when not needed. When you commit to the team, you are accepting that you work together as a common course of action, which engenders similar action by team members, creating a self-reinforcing group. 

Communication

Finally, we come to communication, the bindings that tie all the others together.  A person simply cannot be a good leader if they are not also a good communicator.  Transparency, clarity of vision, visibility, and receptivity are key facets required of a good communicator.  When you can spell out the goals set for the team clearly, make yourself constantly available to the team, and actively listen to the voices of the team, this skill at communication earns the respect of your team.

Summary

As I mentioned before, there are theories on leadership that go back thousands of years, so one could just as easily pick a definition from any period along the way to compare against.  However, I think it will suffice in this context to use the USMC six traits – surprisingly enough, leadership is a near-universal concept, whether in the uniform of a military officer or the business-casual of a startup CTO.

Notice that each of the six focuses on strengthening the individual members of the team – and tends to de-emphasize the role of the leader her- or himself.  This is not to say the leader should play no role, but rather that a successful leader encourages the team to excel.  The most important take-away for aspiring leaders is:

This isn’t about you at all.  It’s all about the team.

In the next section, we’ll talk about how these apply in a software development and IT arena.

Part 2

Part 3 (end)


RAID-5 and the Sky Is Falling

The Situation

I’ve seen, over the last few weeks, more than a few posts on a popular IT hangout site proclaiming in loud, evangelical voice that “RAID-5 is terrible for spinning disks, never use it!  If you do, you’re a stupidhead!” and similar statements.

I’m here to tell you that’s not an appropriate answer.  In fact, it’s tunnel-vision BS.

I’m also here to remind you that RAID is not a backup – it is avoidance of downtime, and it is reliability of storage.  If you are relying on a RAID array to protect you from data loss, you need to add some extra figures to your budget.  You cannot have your production system also be your backup repository.  If you think that you are safe because all your production data is on a RAID, and you don’t bother with a proper backup, you are going to be in deep kimchee when you have a serious issue with your array.

Now, I suspect there is a kernel of truth inside the concern here – it seems to stem from an article written last year whose theme was “Is this the end of RAID-5” or something similar.  That article was quite accurate in its point – that with the escalating size of drives today, and the numbers of them we are using to produce our volumes, it is inevitable that a drive failure will occur – and that during a rebuild, it becomes a mathematical likelihood that a read error will result in a rebuild failure.

All quite true.

But in many of the conversations where I’ve seen the doomsayers trumpeting their end-of-the-world mantras, the volume sizes simply do not justify the fear.

Let’s  take a realistic look at RAID fails, and figure out the real numbers, so we can all breathe a little calmer, shall we?

As a goal for this article, I want to give you the ability to calculate the odds of data loss in your own RAID systems when we’re done.

First off, we have to look at the risks we are mitigating with RAID…drive failures and read failures.  Both come down to a small percentage chance of failure, best expressed by the figures “Annualized Failure Rate” (AFR, the percentage of drives that die in a year) and “Unrecoverable Read Error” (URE, the rate at which an array’s attempt to read a sector fails, probably due to a bit error).

Google wrote a paper on drive failures about ten years ago, which showed that drives that don’t die in the first few months of life generally last for five years or so before their AFR climbs to about 6%-8% – generally considered unacceptable for datacenter use or any other usage that requires reliability.  As it happens, Backblaze (backblaze.com) is a datacenter operator that regularly publishes its own empirical hard drive mortality stats, so these figures can be updated in your own records using accurate data for the brands of drive you use.

The most current Backblaze chart as of the time of this writing can be found here:  https://www.backblaze.com/blog/hard-drive-stats-for-q1-2018/

So let’s begin, shall we?

During this article, I’m going to spell out several different scenarios, all real-world and all appropriate for both SMBs and personal operations.  I have direct, hands-on experience with each of them, and it is my hope you’ll be able to perform the same calculations for the arrays within your own sphere of control.

Array 1:  4 Western Digital Red drives, 4TB each in a RAID-5 array.

Array 2:  4 HGST NAS drives, 8TB each in a RAID-5 array.

Array 3:  8 Western Digital Red drives, 6TB each in a RAID-6 array. (we’ll also run over this in RAID-5 just to be thorough)

Array 4:  12 Seagate Iron Wolf Pro drives, 10TB each in RAID-6 (as with the above, we’ll hit it at RAID-5 too)

Array 5:  12 Seagate Enterprise Capacity drives, 8TB each in RAID-6 (and RAID-5)

Array 6:  12 Seagate 300GB Savvio drives, RAID-5

Array 7:  7 Seagate 600GB Savvio drives, RAID-5

(Note:  Enterprise Capacity drives have been re-branded by Seagate and now go by the name “Exos”)

We start by collecting failure rates for those drives – both annualized failure rates from the empirical charts at Backblaze, and the averaged bit-read error rate.  Note that AFR increases with age, high temperature, and power cycles; it drops for things like using helium as a filler (despite this making all your data sound like it was recorded by Donald Duck).  The bit error rate figures are drawn directly from the manufacturers’ sites (often listed as BER, “bit error rate”), so there will be some ‘wiggle room’ in our final derived figures.

Drive                        Annualized Failure Rate    Bit Error Rate
WD Red 4TB                   2.17%                      1 per 10^14
HGST NAS 8TB                 1.2%                       1 per 10^14
WD Red 6TB                   4.19%                      1 per 10^14
Iron Wolf Pro 10TB           0.47%                      1 per 10^15
Enterprise Capacity 8TB      1.08%                      1 per 10^15
Seagate Savvio 300GB         0.44%                      1 per 10^16
Seagate Savvio 600GB         0.44%                      1 per 10^16

For reference, the reason people often follow up the statement “RAID-5 is crap” with “unless you use an SSD” is that SSDs have a BER of around 1 per 10^17 – a bit error on an SSD is extremely rare.

With these figures, and with the sizes of the arrays and their types known, we can prepare the variables of the equation we’ll build.

Num:  Number of drives in the array

ALoss:  Allowed loss – the number of drives we can afford to lose before unrecoverable data loss occurs.

AFR:  Annualized Failure Rate (derived from empirical evidence)

URE:  Unrecoverable Read Error, this is the same as “Bit Error Rate” above

MTTR:  Mean time to repair – this will vary depending on your drive sizes, cage controller(s), memory, processor, etc.  I’m going to just plug in “24 hours” here, you can put in whatever you feel is appropriate.

We’re also going to be playing a probability game with these: we don’t know exactly when something is going to blow out on us, so we can only work with statistical probability.  To set the stage, let’s play with a few dice (and that’s something I know quite a bit about, having written a book on craps some decades ago).  We want to establish the probability of a particular event.

The probability of an event = number of sought outcomes / number of total outcomes

Starting simple, we’ll use a six-sided die.  We want to prepare an equation to determine the odds of rolling a one on any of ten rolls. 

So our sought outcome is 1.  Number of total outcomes is 6.  That gives us 1/6, or 0.1667.

We’re trying ten times, which complicates matters.  It’s not simply additive – it’s multiplicative.  When we’re combining multiple independent events, we multiply the odds of the events together: the probability of two events A and B both happening is Prob(A) * Prob(B).  If we were asking “what are the odds of rolling a one on each of ten rolls,” it would be pretty easy.  But that’s not the question we’re asking.

The question we’re asking is what are the odds of one or more of the rolls being a one?

We have to invert our approach a bit.  We’re going to start with 100% and subtract the chance of never getting a 1.  If we determine the odds of avoiding a 1 on every single roll, then the chance of getting a 1 on any of our rolls is the inverse of that.  The odds of not getting a 1 on a given roll are 5/6, and there are ten tries being made, so (5/6) raised to the 10th.  Then we simply subtract that from 100% to get our answer.

(5/6) raised to the 10th is (9,765,625 / 60,466,176), which is 0.1615 – I rounded a bit.

1-0.1615= 0.8385, which is our result.  The odds of rolling a 1 on any of ten individual rolls is 83.85%.
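The dice math above translates directly into a few lines of code.  A quick sketch (Python is my choice here, not something the article prescribes):

```python
from fractions import Fraction

def p_at_least_one(p_single, trials):
    """Chance of at least one 'hit' in `trials` independent tries,
    computed as 1 minus the chance of zero hits."""
    return 1 - (1 - p_single) ** trials

# Odds of rolling at least one 1 in ten rolls of a six-sided die.
print(float(p_at_least_one(Fraction(1, 6), 10)))  # ~0.8385
```

The same one-minus-the-complement trick carries through every calculation that follows.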

RAID Types

A little backgrounder on types of RAID for the uninitiated here – and there’s no shame in not knowing, this stuff is pretty dry for all but the total platterhead.  I guess that means I’m a bit of a dork, but what the hell.

RAID means “Redundant Array of Inexpensive Disks” and first became popular commercially in the late ‘80s and early ‘90s, when hard drives were becoming a big economic factor.  Previously, a strategy called “SLED” (“Single Large Expensive Disk”) was considered the go-to model for storage.  RAID took over because it was far more economical to bond multiple inexpensive units into an array than to buy a single drive of equivalent capacity.

Different RAID types offer different advantages.  Importantly, all of them are considered for use as volumes, just like you’d consider a hard drive.  These aren’t magic, they’re just volumes.  How you use them is up to you.  When you store production data on them, they need to be backed up using smart backup practice.

Most mentions you’ll see regarding RAID include various numbers, each of  which means something:

RAID 0 – this form of raid uses at least two disks, and “stripes” data across all of them.  This offers fast read performance, fast write performance.  Usually this RAID limits its use of any physical drives to the size of the smallest in the group (so if you have three 4TB and one 6TB, it will generally only use 4TB of the 6TB drive).  This RAID also provides the used capacity in full for storage, so 3 4TB drives will make a 12TB RAID 0 volume.  This RAID adds vulnerability:  if any one of the drives in the array is lost, you lose data.

RAID 1 – this is “mirroring”.  It uses an even number of disks (usually just two), and makes an exact copy of the volume data on each drive.  They don’t have to be the same size, but the volume will only be as big as the smallest drive.  The benefits are fast reading (no benefit in write speed) and redundant protection – if you lose a drive, you still have its mirror.  It is also fast to create, as adding a second drive only requires that the new drive receive a copy of the other.  The performance benefits are limited by the speed of the slowest member of the array.  This method gives up 50% of the total drive capacity to form the mirror.

RAID 2 – it’s unlikely you’ll ever see this in your life.  It used a dedicated disk for parity information in case of the loss of a data disk.  It was capable of super-fast performance, but depended on coordinating the spin of all disks to stay in sync with each other.

RAID 3 – Also extremely rare, this one is good for superfast sequential reads or writes, so perhaps would be good for surveillance camera recording or reading extended video tracks.  This also uses a parity disk similar to RAID 2.

RAID 4 – another rare one, suitable for lots of little reads, not so hot for little writes, also uses a dedicated parity disk like 2 & 3.

RAID 5 – this is currently the most common form of raid.  It stripes data among all its drives, just like RAID 0, but it also dedicates a portion of its array equal to the capacity of one of its disks to parity information and stripes that parity information among all disks in the array.  This is different from the previous forms of parity, which used a single disk to store all parity info.  RAID 5 can withstand the loss of any one disk without data loss from the array’s volumes, but a second drive loss will take data with it.  This array type has an advantage in write speed against a single disk, but not quite as good as RAID 0 since it has to calculate and record parity info.

RAID 6 – this basically takes the idea of striped parity in RAID 5 and adds redundancy to it:  this array stores parity info twice, enabling it to resist the loss of two drives without data loss.

RAID 10 – this is actually “nested” RAID, a combination of 1 (mirroring) and 0 (striping).  This requires at least four disks, which are striped and mirrored.  Usually this is done for performance, and some data protection.  It’s a little bit more protected than RAID 5, in that it can withstand the loss of one drive reliably, and if it loses a second, there’s a chance that second drive won’t cause data loss.  However, this one gives up 50% of the total drive capacity to the mirror copies.

There are also a series of other nested forms of RAID, but if you need those you’re well past the scope of this article.
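For quick comparisons, the usable capacity of the common levels can be sketched as a rule of thumb (assuming identical drive sizes; `usable_tb` is my own illustrative helper, not a standard function):

```python
def usable_tb(level, num_drives, drive_tb):
    """Usable capacity in TB for a few common RAID levels,
    assuming all drives are the same size."""
    if level == 0:
        return num_drives * drive_tb        # no redundancy
    if level == 1:
        return drive_tb                     # full mirror
    if level == 5:
        return (num_drives - 1) * drive_tb  # one drive's worth of parity
    if level == 6:
        return (num_drives - 2) * drive_tb  # two drives' worth of parity
    if level == 10:
        return num_drives * drive_tb / 2    # striped mirrors
    raise ValueError("level not covered here")

print(usable_tb(5, 4, 4))  # four 4TB drives in RAID-5: 12 TB usable
print(usable_tb(6, 8, 6))  # eight 6TB drives in RAID-6: 36 TB usable
```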

Parity

Credit: Wikipedia

In RAID terminology, “parity” is a value calculated from the combination of bits on the disks in the array (most famously an XOR calculation, though vendors can stray from this), and the result is recorded as the “parity” bit.

In the image here of a RAID 6 array, the first bit of stripe A’s parity would be generated by taking the first bit of each A1, A2, and A3, and performing a sequential XOR calculation on them.  This would produce a bit that is recorded on both Ap and Aq.  Later, if a disk fails – say Disk 0 bites it – then the system can read the data from the bits in A2, A3, and Ap or Aq to figure out what belongs where A1 used to be.  When a new drive replaces the failed Disk 0, that calculation is run for every bit on the disk, and the new drive is “rebuilt” to where the old one was.

There’s also an important point to be made about the types of parity you’re looking at in that image.  There are multiple ways to calculate the parity bit that is being used.  In RAID 5, the most common is an XOR calculation.  In this method “bit number 1” on each data stripe is XOR’ed with the next one, and then the next, etc. until you reach the parity stripe, where the result is recorded.  Effectively this is a “horizontal” line drawn through each disk, ending in the parity stripe.  So when you need to know what was on a lost data disk (whether rebuilding or just reading), it can be reconstructed by working that XOR equation backwards.
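That XOR reconstruction trick is easy to demonstrate on toy data – a minimal sketch with two-byte “blocks” standing in for full stripes:

```python
def xor_parity(blocks):
    """XOR all the blocks together, byte by byte - the RAID-5 parity calc."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# Three data blocks of a hypothetical 4-disk RAID-5 stripe, plus parity.
d1, d2, d3 = b"\x0f\xf0", b"\x33\x33", b"\x55\xaa"
p = xor_parity([d1, d2, d3])

# The disk holding d2 dies: XOR the survivors with the parity to rebuild it.
rebuilt = xor_parity([d1, d3, p])
print(rebuilt == d2)  # True
```

The same function computes the parity and reverses it, because XOR is its own inverse.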

And then…the gods rose from R’lyeh to lay down RAID 6 parity.

Most RAID-6 uses an encoding method for its extra parity called “Reed-Solomon” (this method is used in a lot of data-reading applications, like barcode scanners, DVD readers, and low-bandwidth radio data transmission).  This method manages to record parity against a second missing piece using other data in the array – RS encoding builds its parity using an algorithm that generates something like a scattergram of source bits, both vertical and horizontal (which makes it resistant to the loss of a second data disk – if it just copied the XOR result of the first disk then a second data disk would corrupt the intent).  I’m not going to pretend I understand the Galois Field and other heavy-duty math behind this stuff, I just know it exists, it is commonly used for RAID-6, and Dumbledore or the Old Ones were probably involved somewhere along the way.  It costs more CPU- and IO-wise, which is why it isn’t commonly used in RAID-5.

(I say “Most” RAID-6, because other vendors can use different methods – for example, Adaptec has their own proprietary algorithm in their hardware controllers, different from RS, but the functional result to us as users is the same.)

Data Loss

What is it that takes us into data loss territory?  Obviously, dropping the entire cage while powered up and running will get us there fast.  Let’s assume that if something along those lines were to occur, you’d have an entirely different set of problems and wouldn’t have time to be perusing this article.  Instead, we’ll focus on natural wear-and-tear.  To get to data loss, two things have to happen:

  1. An initial drive failure, and…
  2. Enough further drive failures, at any point before the data is preserved, to exceed our acceptable loss

Some other topics we’ll talk about:

  1. Possible drive failure during a rebuild (I’ll tell you towards the end here why you should have caution before starting that rebuild)

…and/or…

  2. A read error during a rebuild (this is why starting that rebuild requires caution)

This brings me to a very important point, and one around which this entire discussion revolves:  protecting your data.  I think the entire “RAID-5 is poopy” argument stems from forgetting that one must never rely on RAID levels as the only protection of your data.  RAID serves to make you a nice big volume of capacity, and protects your uptime with some performance benefits.

It does not magically provide itself with backup.  You have to back it up just like anything else.

So if you’re creating a 3TB array, get something that can back that array up, with enough capacity on reliable storage to keep your data safe.

Drive Failure

First Failure

The initial drive failure is a compound figure of the AFR by the number of drives, and we’ll figure it on an annual rate. This part is pretty simple, let’s go back to our dice equation and substitute drive values:

Drive Loss Rate = what are the odds at least one drive will die in a year?

If it’s just one drive, that’s easy – use the AFR.

But it’s multiple drives, so we have to approach it backwards like we did with dice rolls.

So it’s 100% – (1 – AFR)^N, where N is the number of drives.

For my Array 1, for example:  those WD’s have an AFR of .0217.  Plugging this into the equation above yields:

100% – (1 – 0.0217)^4 = 100% – 91.59% = 8.41%

So I have about an 8.41% chance of losing a drive in a given year.  This will change over time as the drives age, etc.

Drive Failure Before Preservation

So let’s assume I lost a drive.  I’m now ticking with no redundancy in my array, and what are my chances of losing another to cause data loss during the window of time I have to secure my data?

This one is also pretty simple – it’s the same calc we just did, but we’re doing it only for the gap-time before we preserve the data and for the remaining drives in the array.  Let’s use two examples – 24 hours, and two weeks.

24 hours:  1 – (1 – (AFR × 0.00273))^N

Where AFR is the AFR of the drive, N is the number of drives remaining.  The 0.00273 is the fraction of a year represented by 24 hours.

2 weeks:  1 – (1 – (AFR × 0.0384))^N

0.0384 is the fraction of a year represented by 2 weeks.

If it’s my Array 1, then we’re working with WD reds which have a 0.0217 AFR.  I lose a drive, I have three left.  Plugging those values in results in:

24 hours:  1 – (1 – (0.0217 × 0.00273))^3 = 1 – (0.99994)^3 = 0.0001777, or a 0.01777% chance of failure

2 weeks:  1 – (1 – (0.0217 × 0.0384))^3 = 1 – (0.99917)^3 = 0.002498, or a 0.2498% chance of failure

We now know what it will take for my Array 1 to have a data loss failure:  8.41% (chance of initial drive failure) times the chance of failure during the gap when I am protecting my data.  Assuming I’m a lazy bastard, let’s go with 2 weeks, 0.2498%.

That data loss figure comes out to be 0.021%.  A little bit more than two chances in ten thousand.
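Stringing the two steps together as a sanity check (same figures as above, so any difference from 0.021% is just rounding):

```python
# Chance of an initial drive failure within a year (4 drives, AFR 2.17%)
first = 1 - (1 - 0.0217) ** 4

# Chance of a second failure among the 3 survivors inside a 2-week window
second = 1 - (1 - 0.0217 * 0.0384) ** 3

print(f"{first * second:.3%}")  # ~0.021%
```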

Based on that, I’m pretty comfy with RAID-5.  Especially since I take a backup of that array every night.

Unrecoverable Read Error

This figure is generally the one that strikes fear into people’s hearts when talking about RAID-5.  I want to establish the odds of a read error occurring during the rebuild, so we can really assess what the fearful figure is:

What is Bit Error Rate?  In simple terms, BER is calculated as (# of errors / total bits read).  Let’s find a way to translate these minuscule numbers into something our brains can grok, like a percentage.

To start, we’re reading some big quantities of data from hard drives, so let’s bring that into the equation too – there are 8 bits in a byte, and 1,000,000,000 bytes in a Gigabyte.  Add three more zeroes for a Terabyte.

Be aware that some arrays can see a failure coming and activate a hot-spare to replace the threatened drive – most SAN units have this capability, for example, and so do a lot of current NAS vendors.  If yours can’t, this is where you should be paying attention to your SMART health reports, so you can see the failure coming and take action beforehand – usually by installing and activating a hot-spare.  If a hot-spare gets activated, it receives a bit-for-bit copy of what’s on the failing drive and is then promoted to take over that disk’s position.  This avoids rebuild errors and is much faster than a rebuild, but it doesn’t protect from bit errors: if one occurs during the copy, the incorrect bit is simply written to the new drive.  This might not be a big issue, as many file formats can withstand an occasional error – it might even land on unused space.

Rebuilds of an array are another case entirely.  The time required is much greater, since the array is reading every single bit of stripe data from the remaining good drives, doing an XOR calc with the parity stripe to determine what each missing bit should be, and writing the result to the new drive.  During a rebuild, a bit error poses a bigger problem: if we can’t read a bit, we can’t do the XOR calc, and that means a rebuild failure.
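As a toy illustration of that XOR reconstruction (hypothetical byte values, three surviving members of a 4-drive set):

```python
# RAID-5 parity: the parity block is the XOR of the data blocks in a stripe,
# so any single missing block can be recomputed from the survivors.
data = [0b10110100, 0b01101001, 0b11100011]    # three data blocks in one stripe
parity = data[0] ^ data[1] ^ data[2]           # stored on the parity drive

# Say the drive holding data[1] dies; the rebuild XORs everything still readable:
rebuilt = parity ^ data[0] ^ data[2]
assert rebuilt == data[1]                      # identical to the lost block
```

If any one of those reads throws a URE, the XOR can’t be completed for that stripe – which is exactly the rebuild-failure case described above.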

(If we’re in RAID-1, by the way, that’s a block-for-block copy from a good drive to the new drive – bit error will end up copying rather than calculating, so there won’t be a failure, just bad data.)

If we had a hot spare, we’d be out of the woods before having to rebuild.  But let’s keep looking at that rebuild.

Translating that BER into the likelihood of a rebuild failure…the math gets a little sticky.

UREs, just like drive fails, are a matter of odds.  Every bit you read is an independent event, with the odds of failure being the per-bit error rate that we collected about the drive.  The probability equation comes out looking like this:

P(URE during rebuild) = 1 – (1 – BER)e(bits read)

Let’s apply the probabilities we started with at the beginning of this article to the drives in my Array 1 now.  A reminder, these are WD Red 4TB drives.  Western Digital sets a BER value of 1 per 10e14.

Array 1 blows a drive.  I’ve got three left, and a new 4TB I popped into the array.  I trigger the rebuild.  We’ve already said 24 hours, so we’ll stick with that (technically it’s closer to 10h for a 4TB, but big deal).

Edit 10.10.2018 – I have identified a mistake in my calcs here, courtesy of the Spiceworks forum.  Parity data is being read from more drives than I originally laid out.  By the time you read this, the information below will have been corrected.

My array now has to perform three reads (two data and one parity) to get each value to be written to the new drive.  So I’m actually reading three times the volume of the target drive.

4TB is 4,000,000,000,000 bytes.  Three times that is 12,000,000,000,000.  8 bits per byte means 96,000,000,000,000.  Which is a crap-ton of bits.

However, 10e14 (the BER of our WD drives) is 100,000,000,000,000.  That’s an even bigger crap-ton.  Not that much bigger, but bigger.

So let’s ask the question, and plug in the numbers.  The question:

During my rebuild, what are the odds of rolling a mis-read on any of my 96,000,000,000,000 reads?

As before, let’s invert this question and ask instead, what are the odds of not rolling a mis-read on every one of our reads? and then subtract that from 1.

The odds of a successful read on each try are 99,999,999,999,999 / 100,000,000,000,000, and we’re trying 96,000,000,000,000 times.  Most of our PCs can’t raise something to the 96-trillionth power, I’m afraid.  Even Excel’s BINOM.DIST will barf on numbers this size.  You’re going to need a scientific calculator to get this done.

1 – (99,999,999,999,999 / 100,000,000,000,000)e96,000,000,000,000 =

1 – (0.99999999999999)e96,000,000,000,000 =

(now you’re going to have to trust me on the following figure, I got it from the scientific calculator at https://www.mathsisfun.com/scientific-calculator.html)

1 – 0.38318679500580827 = 0.6168132049941917
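If you’d rather not trust an online calculator, the same figure falls out of a couple of lines of Python.  `log1p` and `expm1` are used here because naively computing `(1 - 1e-14) ** 96e12` can lose a chunk of accuracy to floating-point rounding:

```python
import math

bits_read = 96_000_000_000_000   # 3 drives x 4 TB x 8 bits, as tallied above
ber = 1e-14                      # 1 error per 10e14 bits (the WD Red spec)

# P(at least one URE) = 1 - (1 - BER)^bits_read, computed in log space
# so the tiny per-bit probability isn't swallowed by rounding.
p_ure = -math.expm1(bits_read * math.log1p(-ber))
print(f"{p_ure:.2%}")            # about 61.7%
```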

So the odds of a URE giving my Array 1 a bad case of indigestion during the rebuild are 61.68%.  That’s a pretty scary figure, actually, and I’ll get to mitigating it later.  It’s this kind of figure that I think generally gives people enough of the willies to make that crazy “RAID-5 is for poopyheads!” proclamation.  Very likely because the people who make that claim assume that this is the end of the road.

Thankfully, what we really care about is the odds of data loss – not rebuild failure, though rebuild failure does factor into the odds of loss.

The Equation for Data Loss

In order to lose data, we basically have to lose more drives than our array can tolerate, before we can protect or preserve that data.

Let’s say that window of time comes out to two weeks.  That’s probably a lot more than we need, so it will inflate the odds to a conservative number.  Two weeks is 336 hours, or 0.0384 of a year (336 / 8,760).

So given that, the basic odds of data loss are:

P(data loss) = P(initial drive failure) × P(further failures during the window)

For RAID-5, we need to lose a second drive for data loss.  That means odds of Initial loss * odds of another loss during window (remember that these are multiplicative, not additive).  If all the arrays I mentioned above were RAID-5, and using the “lazy bastard” two-week window, here’s where we’d be:

| Array (# drives) | Drive Type | Annualized Failure Rate | Odds of Initial Loss: 1-(1-AFR)eN | Loss During Window: 1-(1-(AFR*0.0384))e(N-1) | Total Chance: Initial × Window |
| --- | --- | --- | --- | --- | --- |
| Array 1 – 4 drives | WD Red 4TB | 2.17% | 1-(1-.0217)e4 = 8.41% | 1-(1-(.0217*.0384))e3 = 0.25% | 0.00021, or 0.021% |
| Array 2 – 4 drives | HGST NAS 8TB | 1.2% | 1-(1-.012)e4 = 4.7% | 1-(1-(.012*.0384))e3 = 0.138% | 0.00006486, or 0.0065% |
| Array 3 – 8 drives | WD Red 6TB | 4.19% | 1-(1-.0419)e8 = 28.99% | 1-(1-(.0419*.0384))e7 = 1.12% | 0.003249, or 0.3249% |
| Array 4 – 12 drives | Iron Wolf Pro 10TB | 0.47% | 1-(1-.0047)e12 = 5.50% | 1-(1-(.0047*.0384))e11 = 0.198% | 0.000109, or 0.0109% |
| Array 5 – 12 drives | Iron EC 8TB | 1.08% | 1-(1-.0108)e12 = 12.217% | 1-(1-(.0108*.0384))e11 = 0.455% | 0.0005559, or 0.05559% |
| Array 6 – 12 drives | Seagate Savvio .3TB | 0.44% | 1-(1-.0044)e12 = 5.154% | 1-(1-(.0044*.0384))e11 = 0.1857% | 0.0000957, or 0.00957% |
| Array 7 – 7 drives | Seagate Savvio .6TB | 0.44% | 1-(1-.0044)e7 = 3.04% | 1-(1-(.0044*.0384))e6 = 0.1013% | 0.0000308, or 0.00308% |
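The whole table can be cranked out the same way.  This sketch just re-applies the two formulas above to each array’s AFR (0.0384 being the two-week window factor):

```python
# RAID-5 data-loss odds: P(initial loss) x P(second loss during the window).
window = 0.0384   # two weeks as a fraction of a year

arrays = [  # (label, drive count, annualized failure rate)
    ("Array 1 WD Red 4TB",        4,  0.0217),
    ("Array 2 HGST NAS 8TB",      4,  0.0120),
    ("Array 3 WD Red 6TB",        8,  0.0419),
    ("Array 4 IronWolf Pro 10TB", 12, 0.0047),
    ("Array 5 Iron EC 8TB",       12, 0.0108),
    ("Array 6 Savvio 300GB",      12, 0.0044),
    ("Array 7 Savvio 600GB",       7,  0.0044),
]

for label, n, afr in arrays:
    p_initial = 1 - (1 - afr) ** n
    p_window = 1 - (1 - afr * window) ** (n - 1)
    print(f"{label:26s} {p_initial:7.2%} {p_window:8.4%} {p_initial * p_window:9.5%}")
```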

I think the values above show definitively that RAID-5 is a perfectly viable storage mechanism.

RAID-6 Enters the Fray

With RAID-6, we’re now adding a second parity stripe distributed among the disks of the array.  In order for this type of array to fail, we have to have a third disk die during the window.  I won’t repeat the entire set of equations, because that would be a pain in the ass.  Basically, we’re adding a new column, called “2nd Loss During Window”, which has the exact same formula as the “Loss During Window” one.  The only difference is that the exponent is one less.  Once we get the result of that column, we multiply it by the Initial Loss and Loss During Window to get the real figure of data loss.

| Array (# drives) | Drive Type | Annualized Failure Rate | Odds of Initial Loss: 1-(1-AFR)eN | Loss During Window: 1-(1-(AFR*0.0384))e(N-1) | 2nd Loss During Window: 1-(1-(AFR*0.0384))e(N-2) | Total Chance: Initial × Window × 2nd |
| --- | --- | --- | --- | --- | --- | --- |
| Array 1 – 4 drives | WD Red 4TB | 2.17% | 1-(1-.0217)e4 = 8.41% | 1-(1-(.0217*.0384))e3 = 0.25% | 1-(1-(.0217*.0384))e2 = 0.167% | 0.00000035, or 0.000035% |
| Array 2 – 4 drives | HGST NAS 8TB | 1.2% | 1-(1-.012)e4 = 4.7% | 1-(1-(.012*.0384))e3 = 0.138% | 1-(1-(.012*.0384))e2 = 0.092% | 0.0000000597, or 0.00000597% |
| Array 3 – 8 drives | WD Red 6TB | 4.19% | 1-(1-.0419)e8 = 28.99% | 1-(1-(.0419*.0384))e7 = 1.12% | 1-(1-(.0419*.0384))e6 = 0.9615% | 0.00003122, or 0.003122% |
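Extending the earlier calculation to RAID-6 is just the one extra factor described above – same formula, exponent one less, multiplied in:

```python
# RAID-6 data loss requires a third drive to die inside the window.
window = 0.0384   # two weeks as a fraction of a year

def raid6_loss(n, afr):
    p_initial = 1 - (1 - afr) ** n                 # first drive fails
    p_second = 1 - (1 - afr * window) ** (n - 1)   # second fails during the window
    p_third = 1 - (1 - afr * window) ** (n - 2)    # third fails during the window
    return p_initial * p_second * p_third

for label, n, afr in [("WD Red 4TB", 4, 0.0217),
                      ("HGST NAS 8TB", 4, 0.0120),
                      ("WD Red 6TB", 8, 0.0419)]:
    print(f"{label:14s} {raid6_loss(n, afr):.6%}")
```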

As you can see, even if you’re a lazy bastard about your window of vulnerability, RAID-6 makes the odds of data loss vanishingly small.

Failure Mitigation

So you had a drive blow out in your RAID-5 or -6 array, and you’re staring at the Loss During Window column now, wondering what to do.

The most important action you can take right now is this:

CALM DOWN.

You haven’t lost data yet.  But by hasty action, you might.  Stop, breathe.  Do NOT touch that array, and do NOT power it down just yet.  If one of your disks has checked out of the hotel, when you reboot the cage, there’s a chance it could “unrecognize” that disk and re-initialize the array, blowing your data into never-never land.

Steps to take here:

  1. DO NOT STUFF A NEW DRIVE IN THE ARRAY AND REBUILD. NOT YET.
  2. If you haven’t done so already, write down your RAID configuration. Include total capacity, disk types, stripe size, drive order, partitions/volumes and any other details you can get.
  3. Can you isolate the array from users? If you can, do it.  Get their IO off the array if possible.
  4. Check your backups and confirm that you have a backup of the array’s data.
  5. Get another volume online that has capacity at least equal to the total used space on the degraded array. One of the easiest methods of doing this is a USB 3.0 drive cradle and a set of SATA drives.
    1. Copy all your data from the array onto this volume and confirm that it is valid
  6. If you can affirm that 5.a is done and good, proceed
  7. Are all the drives in the cage the same age? If so, get replacements for all of them and start a completely new array with the new ones.  Retire the old drives.
    1. Reason for this is that they have all experienced similar wear-and-tear, and they all probably come from the same batch made at the factory – if there is a defect in one, there’s a good chance that this defect applies to all of them. You’re better off just dropping them all and replacing them.
    2. If they aren’t the same age, just note the ones that are, and plan to replace them asap.
  8. Okay, if 4 is good and 5 is good, NOW you can do a rebuild if you feel you have to. I still recommend reinitializing completely fresh and restoring the copied/backed up data, but I also recognize that convenience is a big draw.

Part of the whole debate about the validity of RAID-5 tends to stem from the probability of failure during a rebuild – which can be unacceptably high with old disks of appreciable size (see my section on UREs above).  The argument seems to make the assumption that the array is either not backed up, or is somehow on critical path for general use by users.

Rebuilding an array while live and in production use should be considered a last resort.  You can see above that there is a high likelihood of failure even with reasonably modest-sized arrays.  The fact that current RAID vendors offer live-system rebuilds should be considered a convenience only at this point.  When we were using 100GB disks, a live rebuild was a viable option, but that simply doesn’t hold any more.

If your array is in that position – critical path and not backed up – then you have a big problem.  You need to get a backup arranged yesterday.  And if it is critical path, then you should ensure that there is a failover plan in place.  Never assume that just because you have your critical data on RAID that you are totally safe.  You are safer in the case of a drive fail, yes, but you aren’t out of the woods.

Stuff to consider that will help you survive an array failure:

  • Buy a USB cradle or a tape drive that can handle the capacity of your RAID array. Use them religiously to preserve your data.
    • Test them regularly (monthly is good) to ensure that when a fail does happen, you’re prepared to recover.
  • Consider a second array, or a big-ass disk that you can house next to the array, of similar capacity that you can set up on a synchronization system (for example, Synology has “Cloud Station Server” and “Cloud Sync” apps that can be used to ensure one NAS maintains exactly the same content as the other). That becomes your fail-over.
  • Unless you absolutely have to, do not rely on the use of a live rebuild to preserve your data.
  • If you have room in your cage, add another drive and convert your RAID-5 to RAID-6 to buy you extra insurance against multiple drive failure.
  • Smaller volumes are better than big ones – you can shovel smaller volumes onto a USB drive more easily than trying to subdivide one large one onto multiple removable drives.
  • When filling up an array, buy disks of the same brand and capacity, but mix up who you buy them from or buy them over time to protect you from factory batch errors.

Summary

There’s no panacea here with RAID systems.  They’re great, they’re effective, and there are simply some things that they do not do.  I hope that I have helped dispel some of the fear about RAID-5 here, and it is also my hope that I have called attention to any gaps in your data coverage so that you can fill them now rather than wait for the inevitable to occur.  With luck, you can breathe a little easier now, and not be too harsh on RAID-5.

Feel free to write me with any questions, comments, death-threats, or mathematical corrections you might feel necessary.  Meanwhile, happy computing.

Edit 13.08.2018:  I whipped up the figures into a spreadsheet that you can download and use for your own arrays as well.

Edit 10.10.2018:  edited for clarity, and corrected math on UREs.  Also corrected spreadsheet which is linked below.

Array Life Expectancy

 
