An interview with Michael Ludwig (@blazent) at #Know17


Hello and welcome to IT Chronicles 10 in Tech, coming to you from Knowledge17. I'm Shane Carlson, and my co-host is Carlos Casanova. We're here with Michael Ludwig from Blazent. Thank you for being on the show today; we appreciate you joining us. Tell us a little bit about what you're doing at Blazent right now in terms of discovery, the CMDB, and everything you're doing around data quality.

Sure. One of the important things to realizing the value of a ServiceNow implementation is to get the CMDB right, because so many things live downstream of it. We're really acting as a data quality management layer fronting ServiceNow: bringing in best-of-breed tool sets from lots of customer implementations, doing data quality normalization, putting all of that data together, and then automating the process of both populating CIs into the CMDB and maintaining the attributes around those CIs as they live through their life cycles.
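Blazent's actual pipeline is proprietary, but the kind of normalization and merge step described here is easy to picture. Below is a minimal Python sketch under assumed inputs: the source names, field names, and merge key (hardware serial) are all hypothetical, chosen only to illustrate collapsing records from multiple discovery tools into one CI.

```python
# Minimal sketch of multi-source CI normalization; not Blazent's actual
# pipeline. Field names and source names are hypothetical.
from dataclasses import dataclass

@dataclass
class CIRecord:
    serial: str      # hardware serial number, used as the merge key
    hostname: str
    source: str      # which discovery tool reported this record

def normalize(record: CIRecord) -> CIRecord:
    """Canonicalize fields so records from different tools can be compared."""
    return CIRecord(
        serial=record.serial.strip().upper(),
        hostname=record.hostname.strip().lower().split(".")[0],  # drop domain suffix
        source=record.source,
    )

def merge_sources(*feeds: list[CIRecord]) -> dict[str, list[CIRecord]]:
    """Group normalized records from every tool by serial number,
    so one physical device ends up as one consolidated CI downstream."""
    merged: dict[str, list[CIRecord]] = {}
    for feed in feeds:
        for rec in map(normalize, feed):
            merged.setdefault(rec.serial, []).append(rec)
    return merged

sccm = [CIRecord("abc123 ", "WEB01.corp.example.com", "sccm")]
scanner = [CIRecord("ABC123", "web01", "scanner")]
print(merge_sources(sccm, scanner))  # both rows collapse under serial "ABC123"
```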
There's always that perception of "we're going to solve everything, we're going to discover it, everything's there because it's magically there and it's all magically correct." Talk about that a little bit, because that's sort of your input: you have to deal with what's coming in from that discovery. How do you deal with what's not there, in particular?

Sure. There are really two aspects to that. One is the set of issues that evolve around your understanding of what discovery is capable of doing. In a lot of cases it's reading directly out of the BIOS, and in other cases it's looking farther down into the trees, and the values written there are not necessarily coherent values. A lot of times there are things like "to be filled out by manufacturer." Importing those values into your CMDB does absolutely no good. There's that aspect, and then there's the aspect of really important data that needs to be populated into the CMDB that's not discoverable. We're talking about things like owned by, managed by, supported by, support group: all of those really important things from a governance perspective that are not discoverable items, so you have to look to other data sources to bring that data in and marry that up.
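Both halves of that answer can be sketched. The snippet below, with a hypothetical placeholder list and a hypothetical ownership feed, rejects the junk strings that BIOS fields often contain and then marries in non-discoverable governance attributes; it illustrates the idea rather than any actual product logic.

```python
# Sketch only: drop junk BIOS values, then enrich CIs with governance
# attributes that discovery can never see. All sources are hypothetical.
PLACEHOLDERS = {                       # strings vendors leave in BIOS fields
    "to be filled by o.e.m.",
    "to be filled out by manufacturer",
    "default string",
    "system serial number",
}

def clean_bios_value(value: str | None) -> str | None:
    """Return None for empty or vendor-placeholder values instead of
    importing them into the CMDB."""
    if value is None or value.strip().lower() in PLACEHOLDERS:
        return None
    return value.strip()

def enrich(ci: dict, ownership_feed: dict[str, dict]) -> dict:
    """Marry discovered data with non-discoverable attributes
    (owned_by, support_group, ...) keyed by asset tag."""
    governance = ownership_feed.get(ci["asset_tag"], {})
    return {**ci, **governance}

ci = {"asset_tag": "A-1001", "serial": clean_bios_value("To Be Filled By O.E.M.")}
owners = {"A-1001": {"owned_by": "jsmith", "support_group": "unix-ops"}}
print(enrich(ci, owners))  # serial is None rather than a placeholder string
```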
How do you pivot the data quality story? On the surface, everyone understands that data quality is a good thing, but how do you pivot that conversation with a lot of your clients, especially your bigger clients, into real value and real dollars for the organization?

In a nutshell, it takes one severe outage: the lost revenue from a single severe outage justifies the relatively small dollars involved in maintaining data quality. Some of our tier one customers lose $3 to $4 million a minute when they have a significant outage. So taking care of that on the front end, making sure the CMDB contains quality data that's up to date, and then letting the downstream activities feed off of that is just a natural progression.
I can definitely see, in the case of a large global services provider that's managing asset data and CMDB data for hundreds or thousands of customers, where that could be valuable, especially for things like how they bill their customers based on the quality of the data for the assets they touch. Are you guys doing a lot of that?

Sure. As a matter of fact, we actually see two sides of that. We work with some very large MSP and MSI players in the space, so we do a lot of looking at what they're billing and what the life cycle of the items they're actually billing is, and then rectifying the differences between those. And then from an enterprise perspective, and also an MSI perspective, we're looking at the MSPs operating in the space: are they billing correctly, are they maintaining the SLA levels they stated in the contract? We're actually able to measure all of those things from an operational perspective as well.
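One way to picture that billing reconciliation is as a set comparison between what the MSP invoices and what the CMDB shows as actually in service. A toy sketch, with invented host names:

```python
# Hypothetical reconciliation of billed items vs. items actually in service.
def reconcile(billed: set[str], in_service: set[str]) -> dict[str, set[str]]:
    return {
        "overbilled": billed - in_service,    # invoiced but retired or absent
        "underbilled": in_service - billed,   # in service but never invoiced
    }

billed = {"srv-01", "srv-02", "srv-03"}
in_service = {"srv-02", "srv-03", "srv-04"}
print(reconcile(billed, in_service))
# {'overbilled': {'srv-01'}, 'underbilled': {'srv-04'}}
```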
where they’re losing revenue? Always. It’s not really from a pattern perspective;
it can happen on either side. Either I’m not billing enough from an MSP
perspective or I’m over billing. Some of that lives case to case. In certain cases, I’m billing for more than
I should be, in other cases, I’m not billing enough. I think a lot of people get confused. They think data quality and that’s everything,
but it’s really not. It’s data quality, it’s integrity – there’s
a lot of aspects to that. That’s where you guys live, right? Yes. It is where we live. One of the aspects that’s probably the most
hidden aspect is that people don’t understand that there’s a temporal nature to the data
as it flows through the system. Certain tools touch certain CI types at certain
times and not continually. Understanding which of your tool stack is
the most recent set of data available is really where it starts to begin, and the reconciling
the temporal differences between that data is a challenge in its own right, but really
important to maintaining the integrity of the CMDB. I think you and I have touched on the basis
I think you and I have touched on the basis of that: you get one snapshot, then another, and it looks like everything is perfect and nothing changed. But in reality, a lot happened in between.

That's right. If you're looking at more mature use cases, like compute rationalization in a data center, for example: if we're streaming data continually, we can show you aggregate averages of CPU utilization, memory utilization, and those kinds of things, where if you're just looking at snapshot data, a point in time is useless for trying to determine that. If they're running the scan at 3 o'clock in the morning, so as not to increase load on the network and the devices, they're not going to see a lot of load and utilization in those time frames if that's not what their business is.
What's the biggest surprise a lot of your customers get? You walk into an organization that thinks it has matured; they've been running discovery with ServiceNow for a while and they've got a lot of asset data in their system. Is it often surprising to them when you run your tools against their system and show them how bad the data might be?

Surprise is probably not the word they would use. It's usually a little more negative, because they didn't realize they were in as bad a shape as they actually are. That initial visibility tends to be shocking. But it isn't the end of the line; it's a retrenchment: now that we understand where we're at, we need better processes, better rules, and more efficient workflow. It can be a catalyst for really positive change, but the initial shock is, this doesn't feel so good.

I think from the customer's point of view, the question would be: is this a one-time thing, where you come in, run this once, fix the data, and everything is fine? Or is it a constant plan of attack that you have to maintain to keep the data quality up?

No, there are two pieces to that. It's never done, ever. Technology evolves faster than the speed of light, so change is inevitable, but that's why automation of that process is really important. We have a forward-facing precedence rules engine that puts that automation in place, and once you get it dialed in pretty well, you can have some very sophisticated rules that can manage hiccups along the way and still end up with the right things happening downstream.
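The engine itself is proprietary, but the general shape of a source-precedence rule is simple to sketch: each attribute carries an ordered list of trusted sources, and the first source that supplied a usable value wins, with automatic fallback when a feed hiccups. All names below are assumptions for illustration.

```python
# Illustrative source-precedence rules: per attribute, trust sources in
# order and fall back when the preferred feed has no usable value.
PRECEDENCE = {
    "serial":   ["agent", "sccm", "manual"],   # hardware agent is most trusted
    "owned_by": ["hr_feed", "manual"],         # ownership never comes from scans
}

def resolve(attribute: str, values_by_source: dict[str, str | None]) -> str | None:
    """Walk the precedence list; skip sources that reported nothing."""
    for source in PRECEDENCE.get(attribute, []):
        value = values_by_source.get(source)
        if value:            # missing feed or empty value -> try next source
            return value
    return None

# The agent feed hiccuped this cycle, so the rule falls back to SCCM.
print(resolve("serial", {"agent": None, "sccm": "ABC123"}))  # -> "ABC123"
```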
I don't know if you've done a lot of metrics around this, but what is that ratio? Is it 20% accurate when you first start? I don't want you to divulge anything that might get you in trouble, but is it that low? Is it 99%?

It varies significantly from customer to customer. It depends on the maturity of the process, how well their prior solution set was working for them; there are a lot of things that go into that. On average it's in the 60% area when we first engage, and the high 90s is pretty much where we end up.

That's huge, especially in this day and age. I look at it particularly from a security perspective. For years, we've written things off as just an unauthorized change: no big deal, fix it. In this day and age, when intruders are on the network for that much longer, those little tweaks might be much more significant.

They are significant, in the sense that if you really look at what occurs with most of the major security breaches, it's a lack of due diligence on the part of the organization. It's because they just don't have the capabilities, either from a technology standpoint or, a lot of times, from a personnel standpoint, to be in a position to cover themselves. For example, a logical association between known vulnerabilities and what I have operating in my environment: where am I at with my patching levels, those kinds of things. They don't have visibility into that.
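The correlation he describes, joining an accurate CI inventory against a feed of known vulnerabilities, is a straightforward lookup once the inventory data can be trusted. A hypothetical sketch, with invented data:

```python
# Hypothetical join of CMDB inventory against a vulnerability feed,
# keyed by (product, version). The data is invented for illustration.
inventory = [
    {"ci": "web01", "product": "openssl", "version": "1.0.1f"},
    {"ci": "db01",  "product": "openssl", "version": "1.0.2k"},
]
known_vulns = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],   # Heartbleed
}

# Flag every CI running a product/version with known CVEs.
exposed = [
    (host["ci"], known_vulns[(host["product"], host["version"])])
    for host in inventory
    if (host["product"], host["version"]) in known_vulns
]
print(exposed)  # [('web01', ['CVE-2014-0160'])]
```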
I think back to all the battery recalls and things of that nature, and getting down to specific model numbers and serial numbers. Thank you very much for joining us today. We really appreciate it, and I wish you guys a lot of success.

It was my pleasure. Thank you.

Thank you, Michael.

