Jay Kidd Insight video
Now, you all remember the IBM PC.
This technology changed IT forever.
I assume most of you remember
– you look old enough.
It changed IT forever – right?
Thirty-four years ago – it brought affordable
computing into the hands of small businesses,
of departments and of individuals.
It enabled a level of insight and analysis
into business processes and it produced
significantly better efficiencies.
It enabled innovation and resulted
in better customer experiences.
The IBM PC freed individuals from the constraints
that existed in IT at the time and allowed
them
to independently pursue
ideas for their business.
And technology, which has this kind of
a liberating effect, is unstoppable.
PCs spread throughout the organizations of the 1980s – they were the smartphone of that era.
They were the must-have device -- probably
the first real viral technology
-- and business applications got created.
You remember Lotus?
You remember dBASE, Ashton-Tate?
Companies evolved to run their businesses on networks of PCs, often running a rather tenuous structure of fairly ad hoc applications.
And what we learned is that with great
freedom comes great responsibility.
Because unmanaged freedom can result in anarchy.
You all clearly have had the
experience of a PC that was running
under Mary’s desk and somebody
kicked the plug out and suddenly
you couldn’t take orders, you
couldn’t ship product.
And who knew that was in the center
of a critical business process?
But the PC explosion -- really for
the first time -- it taught IT
that they couldn’t stop the flow of
innovative technology into the enterprise.
And they were struggling with how to strike a balance between embracing these technologies and empowering individuals, yet also assuring that there’s a discipline that enforces and assures continuity of operations for the business.
And it took a little while but IT gradually
figured out that it wasn’t about the
device -- it was about the data.
And if you controlled the data that the PCs were creating, transforming and producing, then you could control the business.
IT then became a point of aggregation, of protection, of stewardship and of governance for the data.
The network storage industry was born.
And NetApp played a big role in the early days of this industry, delivering technology in nineteen ninety-eight which allowed PCs in a Windows environment to share data with many computers in a UNIX environment.
Microsoft got into the game in a big way in the early two thousands, embracing SAN and creating a bridge from the PC world into the traditional data center operations world. Linux matured, and a viable alternative to minicomputer UNIX, running on PC hardware technology, emerged and changed the economics of IT.
And, over the last twenty-five years,
PC technology came to dominate enterprise IT
and it now represents in excess of
seventy-five percent of the compute power
that exists in the enterprise.
But all along the way we learned, and continued to have reinforced, that the devices that ran IT were disposable. It was the data that was durable and the data that really mattered.
Now, just as PCs liberated individuals at that time – enabled them to
build applications on their own
– the Cloud options that are in the market
today are replaying this revolution.
So five or six years ago, if somebody in a business unit had a great idea – “something that’s going to double sales in six months,” “we’ll be able to get a much better yield on our returns,” “I’ve got this great idea and I want to build this application” – they had to go beg for budget, they had to convince IT to buy servers, they had to deploy some storage, they had to build a team, they had to license software and they had to spend months building an application that may or may not actually produce the result that they hoped for.
It’s a soul-sucking experience.
But today a twenty-two year old
with a little knowledge of GitHub
and a credit card can open up a hundred to two hundred VMs on Amazon,
download some open source software,
write a few thousand lines of Python,
load Mongo and build a massively
scaled enterprise application.
Not only without IT’s permission, but without them even knowing about it.
And this is a really cool level of innovation,
but it’s also scary.
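In today’s terms, that pattern might look something like the sketch below – a deliberately rough illustration, not a recipe, assuming the boto3 and pymongo Python libraries; the machine image, instance counts and Mongo hostname are all hypothetical.

```python
# A minimal sketch of the "credit card and GitHub" pattern above.
# Assumes AWS credentials are configured; all identifiers are hypothetical.
import boto3
from pymongo import MongoClient

ec2 = boto3.resource("ec2")

# Spin up a hundred VMs on nothing but a credit card.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical machine image
    MinCount=100, MaxCount=100,
    InstanceType="t2.micro",
)

# Point the application at a MongoDB instance (hypothetical host).
db = MongoClient("mongodb://mongo.example.com:27017")["orders"]
db.events.insert_one({"source": "web", "status": "received"})
print(f"Launched {len(instances)} VMs; first order event stored.")
```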
So just as with the PC thirty years
ago we’re again faced with the challenge
of how do we as an IT community
embrace the power of the innovation
-- the capabilities of this new technology
-- and balance it with the need for discipline
of operations to assure continuity?
This is a tremendous opportunity for IT
and the community of IT to lead because,
if we ignore the Cloud
-- pretend it doesn’t exist
-- just tell people not to use it,
what will spring up is a shadow IT organization
where well-meaning people
will develop applications
with very little understanding or very
little interest in data protection,
recoverability, compliance,
regulatory pressures.
But if as the IT community
we embrace this technology
we can turn a mob into a march,
turn a riot into a regimen
and apply a balance that embraces innovation yet brings discipline to the business as it takes advantage of these latest technologies.
But this changes the role of
IT in significant ways.
It’s no longer only about building
infrastructure and running data centers.
It’s about marshalling the tools and
the applications that acquire,
transform, apply and protect the
data that runs the business.
And it’s about doing that in a Hybrid Cloud environment spanning multiple IT deployment models. Doing this demands a different view of your enterprise data.
George described the NetApp
data fabric on Tuesday
and how it can create a unified
set of data management,
data transport and data access
services that spans multiple Clouds.
How it can give you a consistent view of
how your applications access your data
and make it accessible across multiple Clouds,
without having to redesign applications
to run in different Cloud environments.
It gives you a lot of choice and a lot
of control and negotiating power
with the Cloud suppliers you work with.
And the implications of the data fabric
that spans Clouds are pretty profound
because it will allow you to think
about your overall Hybrid Cloud,
your extended IT environment as
an integrated infrastructure.
And your decision about where to place applications won’t be bound by the compatibility of that application with particular PaaS layers or technologies available in a given Cloud. It will instead be focused on what service level you can get from that particular Cloud, or that particular IT deployment model.
Because some Clouds, especially private Clouds, are very well-suited for those workloads that you absolutely must control. When the CEO asks why ERP is down, “Oh, something happened in the Cloud” is not a good answer. There are things you absolutely have to control that are pivotal to running the business.
But there are other IT deployment models that are really well-suited for other cases – when you want to do an experiment, you want to innovate, you want to develop software, or you’ve got a globally scaled application that may scale up and down, where you’ve got variability of resource consumption – private Clouds are lousy at that.
The NetApp data fabric gives you a unified view
of your data across these different Clouds,
these different IT deployment models.
So what are the implications of this?
What will the world become
as this becomes complete?
Imagine never having to build
another data center.
Now, in IT and in every company,
you all have lots of applications.
And all of them take up space,
they take up power and probably most
critically they take up mindshare,
increasingly scarce mindshare from
skilled IT operations folks.
But unlike your children, whom you love equally, these applications aren’t all equal in your mind.
So where do you start? I mean, some people refer to these as “craplications” – so where do you start in moving some workloads to the Cloud?
A great place to start is with the applications that are a little less important than others.
If you don’t have a DR scenario,
but you feel like you need one, start
with standing up disaster recovery
for those applications in the Cloud.
Using the NetApp data fabric, you can move a copy of the data into the Cloud – either into NetApp Private Storage close to the Cloud, or into Cloud ONTAP residing in the Cloud – and set up the virtual machines so they can run in the Cloud in the event of a disaster. But you don’t have to modify the application to run in the Cloud – you just have to move it there.
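As a rough illustration of that failover step – a sketch under stated assumptions, not a NetApp-supported procedure – the orchestration might look like this, assuming SnapMirror replication is already initialized to a Cloud ONTAP volume, SSH access to its management interface, and boto3 for the standby VMs; the endpoint, volume path and image are all hypothetical, and exact CLI syntax varies by ONTAP version.

```python
# A minimal DR-failover sketch: make the cloud copy writable, then boot
# the standby application VMs against it. All names are hypothetical.
import subprocess
import boto3

CLOUD_ONTAP = "admin@cloud-ontap.example.com"  # hypothetical management address
MIRROR_DEST = "svm_dr:app_data"                # hypothetical destination volume
STANDBY_AMI = "ami-0123456789abcdef0"          # hypothetical VM image

def ontap(cmd):
    """Run an ONTAP CLI command on the Cloud ONTAP instance over SSH."""
    subprocess.run(["ssh", CLOUD_ONTAP, cmd], check=True)

def fail_over():
    # Stop replication and break the mirror so the cloud copy is writable.
    ontap(f"snapmirror quiesce -destination-path {MIRROR_DEST}")
    ontap(f"snapmirror break -destination-path {MIRROR_DEST}")
    # Boot the standby VMs; the application itself is unmodified.
    ec2 = boto3.resource("ec2")
    ec2.create_instances(ImageId=STANDBY_AMI, MinCount=2, MaxCount=2,
                         InstanceType="m4.large")

if __name__ == "__main__":
    fail_over()
```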
As you get comfortable with the DR scenario working – you can test it and do a bunch of things in the Cloud – you can then move the primary instance of those less important applications off into the Cloud, freeing up space in your own data center.
And then, having the same data access methods and data services, so the applications can run the same way in the Cloud as in your own data center, makes it much simpler to just move the applications to the Cloud rather than re-write them for the Cloud.
I believe that the Hybrid Cloud will allow CIOs to run applications they don’t care about on infrastructure they don’t own, run by people they don’t have to hire – and that’s a compelling solution. It allows them to focus their very precious and scarce internal data center resources – their private Cloud – and the mindshare of their team on those applications they absolutely must run themselves. And you’ll never run out of space in your data center, because you can always spill out to the other environments.
Another implication is imagine if you never had
to say “no” to a great new application idea.
Now, the business units are always coming up with the next new application: it’s going to double sales, it’s going to significantly impact the business, “I really want to go build this.”
And it’s been hard to do that – you can’t marshal the resources for it. But what if you could broker a service to give these innovators in the business units access to as much compute power as they need for developing these applications?
You could give them an environment that includes a replica of the production data, or the portion of it that they may need, with common access methods to that data, whether it runs within your own data center, in Cloud ONTAP or on NetApp Private Storage. And they get the technologies they could also use internally: if they want to make multiple copies of the database so that multiple developers can work in parallel, they can clone the data sets; they can take snapshots of their work in progress; they can take advantage of the storage efficiencies. All those capabilities will be there.
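To make that concrete, a clone-per-developer setup might be scripted along these lines – a sketch only, assuming SSH access to an ONTAP system with FlexClone available; the volume, vserver and developer names are hypothetical, and a real deployment would more likely drive this through the ONTAP management APIs.

```python
# Give each developer a space-efficient, writable clone of production,
# plus a baseline snapshot to roll back to. All names are hypothetical.
import subprocess

ONTAP = "admin@ontap.example.com"   # hypothetical management address
PARENT = "prod_db"                  # hypothetical production volume

def ontap_cmd(cmd):
    subprocess.run(["ssh", ONTAP, cmd], check=True)

for dev in ["alice", "bob", "carol"]:
    # FlexClone: a writable copy that shares blocks with the parent.
    ontap_cmd(f"volume clone create -vserver svm_dev "
              f"-flexclone dev_{dev} -parent-volume {PARENT}")
    # Snapshot the clone so work in progress can be rewound.
    ontap_cmd(f"volume snapshot create -vserver svm_dev "
              f"-volume dev_{dev} -snapshot baseline")
```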
And then what if you could apply as
much compute power as you could lay
your hands on and effectively use
to parallelize the QA process –
the testing of the application?
Could that actually shorten the
application release time?
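Here is a sketch of that fan-out, with a local process pool standing in for a fleet of cloud VMs – the shard layout and the use of pytest are assumptions, not a prescription.

```python
# Shard the test suite and run the shards in parallel; with enough
# compute, wall-clock QA time approaches the slowest single shard.
from concurrent.futures import ProcessPoolExecutor
import subprocess

SHARDS = [f"tests/shard_{i}" for i in range(32)]  # hypothetical layout

def run_shard(path):
    """Run one slice of the suite; True if every test in it passed."""
    result = subprocess.run(["pytest", path], capture_output=True)
    return result.returncode == 0

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_shard, SHARDS))
    print(f"{sum(results)}/{len(results)} shards passed")
```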
And then what if these applications were designed in an environment with a common set of services that would let the application run in the Cloud or on premises, depending on what made sense?
Maybe if you want to start out with this
application that promises great things,
let it run in the Cloud. If it turns out to be a flop, as seventy percent of IT projects may turn out to be,
you just spin everything down and
you really haven’t lost much.
If it runs at lower scale than you
expected, running in the Cloud may
be the most economic way to do it.
But if it runs away, if it
takes off and grows huge,
moving it into a more economical alternative – maybe on premises, maybe in a different Cloud – could be the way to go.
So I’ve heard often that Cloud can
significantly reduce the cost of failure
but it can also increase the cost of success.
So you want the flexibility to place the
application where it most
makes sense economically.
So could this technology accelerate development?
Could it lower your cost of applications?
And could it let you try more things – to fail more and, when you fail, to fail cheaply? Because when you try more, you succeed more.
So these two ideas are examples of ways to improve efficiency or accelerate innovation in the IT environment.
But think of what could be possible if you could tap the scale and the power of the public hyperscaler Clouds using traditional enterprise application technologies, without necessarily having to be completely reliant on newer, born-in-the-Cloud application tools and technology.
So imagine building an application to collect data – say it’s from people, from processes, from things in an Internet of Things type of application. That application could be as simple as a single VM or a couple of virtual machines that can talk to the people, the processes and the things, gather the information, and store it either in a database running in Cloud ONTAP or as a set of files.
And if you want to scale that application, you may start small with a couple of virtual machines doing collection, then add ten, fifteen, twenty VMs, up to some limit, all storing data in the shared infrastructure of the Cloud ONTAP environment.
Then, if you want to replicate that or scale it even larger, take the entire unit – the collector VMs and the Cloud ONTAP instance – and provision it tens, dozens, hundreds of times around the Cloud to scale out this process of interacting with the real world.
And then the application logic can focus on interacting with the devices, the people, the processes and the things.
And you can take advantage of the capabilities
of the data fabric to move the data
from the Cloud back to a central point
for deeper analysis, longer term preservation.
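One collector VM in that unit might be as simple as the sketch below – assuming the shared volume is NFS-mounted from Cloud ONTAP and leaving the device protocol as a hypothetical stub; the data fabric then replicates the volume back for central analysis.

```python
# Poll a set of devices and append their readings, one JSON line at a
# time, to per-device files on the shared Cloud ONTAP volume.
import json
import time
from pathlib import Path

SHARED = Path("/mnt/cloud_ontap/telemetry")  # hypothetical NFS mount
DEVICES = ["sensor-001", "sensor-002"]       # hypothetical device ids

def read_telemetry(device_id):
    """Hypothetical stand-in for whatever protocol the devices speak."""
    return {"device": device_id, "ts": time.time(), "value": 42.0}

while True:
    for device in DEVICES:
        record = read_telemetry(device)
        with open(SHARED / f"{device}.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")
    time.sleep(60)  # collection interval; tune per workload
```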
The data fabric takes away a lot
of the complexity of development
of these large scale applications by providing
some of the core data management services.
Now, this isn’t necessarily a use case for real-time ad placement or instantaneous analysis, but I believe there are massive numbers of processes in the real world that operate on hourly, daily and weekly cycles. And if we could simplify the process of applying a massively scaled, global Cloud to those processes, the potential is enormous across a lot of industries.
Bob talked about the agricultural industry
and I think there are revolutions going on in agriculture taking advantage of IT.
So what if you were in the food production
business and you wanted to get a really
good understanding of your supply chain?
For the fields that were growing the crops you were dependent on – the crops you needed to produce your products – you’d like to get an understanding of what the yield was going to be.
You’d like to be able to collect data
from those fields and those farmers
either from agricultural drones
or from heavy farming equipment
that gathers temperature data,
moisture data, fertilizer consumption,
weather data for the environment
-- anything related to the health of the crops.
And this application might start pretty
small. There may only be a few fields that
have the data collection capabilities.
But then that may scale to hundreds of fields
within a region or a country over time.
You probably don’t want to collect
that frequently at the beginning.
When you put the seeds in the ground,
things don’t change that fast so
daily records are probably enough.
But as the crops get close to harvest, you may want to collect multiple times a day, giving you ongoing and continuous insight into the yield of the environment, which then could affect price, which could affect revenues and profitability for your overall business.
Being able to predict that environment with a fairly straightforward application could be very powerful to the business.
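The sampling schedule itself is a small piece of logic. Here is a sketch, with thresholds that are purely illustrative rather than agronomy:

```python
# Collect daily early in the season, then more often as harvest nears.
from datetime import date, timedelta

def collection_interval(harvest: date, today: date) -> timedelta:
    days_left = (harvest - today).days
    if days_left > 30:
        return timedelta(days=1)   # after planting: daily is enough
    if days_left > 7:
        return timedelta(hours=6)  # ripening: a few readings per day
    return timedelta(hours=1)      # near harvest: near-continuous

print(collection_interval(date(2015, 9, 20), date(2015, 6, 1)))  # 1 day
```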
This same idea can be applied in
the industrial environment.
To work with devices in the real world that
could be anything from a highly complex,
semi-autonomous, automated car-production robot, of which there are probably thousands in the country, all the way down to coin-operated washers and dryers in dormitories, of which there could be millions in the country.
You want to be able to talk to these devices.
Most of these devices now produce some
amount of telemetry on their own.
You want to be able to talk to these
devices at scale, gather the information,
maybe do some local processing in the Cloud
to compress some of the data down.
And then use the NetApp data fabric to replicate it back for large-scale geographic analysis – getting insight into trends in utilization, wear levels of parts, exception reports and, in the case of the coin washers, how many coins have been dropped.
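That local compression step could be as simple as rolling raw readings up into hourly summaries before replication – a sketch, with the coin-washer fields borrowed from the example above and entirely hypothetical.

```python
# Roll per-machine readings up by (machine, hour) so only compact
# summaries travel back over the data fabric for central analysis.
from collections import defaultdict

def hourly_rollup(readings):
    """readings: iterable of dicts with machine, hour, coins, cycles."""
    summary = defaultdict(lambda: {"coins": 0, "cycles": 0})
    for r in readings:
        key = (r["machine"], r["hour"])
        summary[key]["coins"] += r["coins"]
        summary[key]["cycles"] += r["cycles"]
    return dict(summary)

raw = [{"machine": "washer-17", "hour": 14, "coins": 3, "cycles": 2},
       {"machine": "washer-17", "hour": 14, "coins": 2, "cycles": 1}]
print(hourly_rollup(raw))  # {('washer-17', 14): {'coins': 5, 'cycles': 3}}
```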
Lots of opportunities to use the Cloud
to interact with the real world
and change the way your
company talks to the world.
The same concepts would apply
to the information era,
tapping into things running on mobile
devices or internet of things or sensors,
being able to have applications that can operate in the small and in the large, in a scalable form, to build out an application that combines the best of the hyperscaler Cloud with the technologies that are familiar to the enterprise.
So imagine the possibilities of being able to combine these Cloud models together.
Now the era of Cloud technology
– it’s well underway.
But the revolution in Cloud-centric IT and Hybrid Cloud operations is really just beginning.
Your future as an IT professional will involve working with multiple Cloud providers in a Hybrid Cloud, woven together operationally and wrapped in a fabric that unifies your view of the data.
NetApp led the world over ten years
ago to a view of unified storage.
We’re leading the world to a unified view of data
in the Hybrid Cloud with the NetApp data fabric.
Now, here at Insight, we’re at the beginning of this journey – and the dawn is always a great time to separate the darkness from the light. Our tech teams are here today learning about the Hybrid Cloud, thinking about what our best practices could be and what the use cases are – and that process will continue.
To our customers who’ve joined us here at Insight
I challenge you and I urge you to use your imagination about what this makes possible for you.
How will the Hybrid Cloud be
adopted in your environment?
What could the NetApp data fabric do for you? How could it simplify your lives? How could it accelerate innovation?
How could it make things more efficient?
What could it let you do that
you couldn’t do before?
Because ahead of us all lies the freedom to
pursue ideas that were not feasible before.
And we are all here together at
the start of this journey.
We are all on the same team, and we want to
be the team that you can count on to win.
Thank you very much.
So, with that, I’d like to introduce
the president of NetApp, Rob Salmon.
Thanks, Jay -- that was outstanding.
I really appreciate it.