
Monday, April 18, 2016

DevOpsDays: Empathy, Scaling, Docker, Dependencies and Secrets

Last week I attended DevOpsDays 2016 in Vancouver. I was impressed by how much the DevOps community has grown since I attended my first DevOpsDays event in Mountain View in 2012. There were more than 350 attendees, all of them doing interesting and important work.

Here are the main themes that I followed at this conference:

Empathy – Humanizing Engineering and Ops

There was a strong thread running through the conference on the importance of the human side of engineering and operations, understanding and empathizing with people across the organization. There were two presentations specifically on empathy: one from an engineering perspective by Joyent’s Matthew Smillie, and another excellent presentation on the neuroscience of empathy by Dave Mangot at Librato, which explained how we are all built for empathy and that it is core to our survival. There was also a presentation on gender issues, and several breakout sessions on dealing with people issues and bringing new people into DevOps.

Another side to this was how we use tools to collaborate and build connections between people. More people are depending on – and doing more with – chat systems like HipChat and Slack to do ChatOps: using chat as a general interface to other tools, and leveraging bots like Hubot to automatically trigger and guide actions such as tracking releases and handling incidents.

In some organizations, standups are being replaced with Chatups, as people continue to find new ways to engage and connect with people working remotely, both inside and outside their teams.
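The bot-driven pattern described above can be sketched in a few lines. This is a hypothetical example of the kind of command dispatcher a Hubot-style script implements; the command names and replies are made up:

```python
# Minimal sketch of a ChatOps command dispatcher, in the spirit of a
# Hubot script. Command names, replies and handlers are hypothetical.

def track_release(args):
    # A real bot would query the release/deploy tooling here.
    return f"release {args or 'latest'}: deployed to staging"

def start_incident(args):
    # A real bot would page on-call and open an incident channel here.
    return f"incident opened: {args or 'untitled'}"

COMMANDS = {
    "release": track_release,
    "incident": start_incident,
}

def handle_message(text):
    """Route a chat message like '!release 1.4.2' to the right handler."""
    if not text.startswith("!"):
        return None  # not addressed to the bot
    name, _, args = text[1:].partition(" ")
    handler = COMMANDS.get(name)
    return handler(args.strip()) if handler else f"unknown command: {name}"

print(handle_message("!release 1.4.2"))
print(handle_message("!incident checkout latency"))
```

The appeal of the pattern is that the whole team sees the command and the tool's response in the same channel, so actions like releases and incident handling leave a shared, searchable record.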

Scaling DevOps

All kinds of organizations are dealing with scaling problems in DevOps.

Scaling their organizations. Dealing with DevOps at the extremes: making it work in really large organizations, and figuring out how to do it effectively in small teams.

Scaling Continuous Delivery. Everyone is trying to push out more changes, faster and more often in order to reduce risk (by reducing the batch size of changes), increase engagement (for users and developers), and improve the quality of feedback. Some organizations are already reaching the point where they need to manage hundreds or thousands of pipelines, or optimize single pipelines shared by hundreds of engineers, building and shipping out changes (or newly baked containers) several times a day to many different environments.

A common story for CD as organizations scale up goes something like this:

  1. Start out building a CD capability in an ad hoc way, using Jenkins and adding some plugins and writing custom scripts. Keep going until it can’t keep up.
  2. Then buy and install a commercial enterprise CD toolset, transition over and run until it can’t keep up.
  3. Finally, build your own custom CD server and move your build and test fleet to the cloud and keep going until your finance department shouts at you.

Scaling testing. Coming up with effective strategies for test automation where it adds the most value – in unit testing (at the bottom of the test pyramid) and end-to-end system testing (at the top). Deciding where to invest your time, understanding the tools and how to use them, and deciding what kinds of tests are worth writing and worth maintaining.
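As a concrete reminder of why the bottom of the pyramid is where automation investment usually starts, a unit test is just a small, fast, isolated check of one piece of logic. A hypothetical example:

```python
# Hypothetical example: a unit test at the bottom of the test pyramid.
# It is cheap to write, runs in milliseconds, and needs no environment,
# which is why this layer scales so much better than end-to-end tests.

def normalize_hostname(name):
    """The unit under test: trim whitespace and lowercase a hostname."""
    return name.strip().lower()

def test_normalize_hostname():
    assert normalize_hostname("  Web01.Example.COM ") == "web01.example.com"
    assert normalize_hostname("db-1") == "db-1"

test_normalize_hostname()
print("unit tests passed")
```

An end-to-end test of the same behaviour would need a deployed system, test data and environment management – which is exactly the investment decision the paragraph above describes.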

Scaling architecture. Which means more and more experiments with microservices.

Docker, Docker, Docker

Docker is everywhere. In pilots. In development environments. In test environments especially. And, more often now, in production. Working with Docker, problems with Docker, and questions about Docker came up in many presentations, breakouts and hallway discussions.

Docker is creating new problems at the start and end of the CD pipeline.

First, it moves configuration management upfront into the build step. Every change to the application, or to the stack that it is built on and runs on, requires you to “bake a new cake” (Diogenes Rettori at OpenShift) and build and ship out a new container. This places heavy demands on your build environment. You need to find effective and efficient ways to manage all of the layers in your containers, caching dependencies and images to make builds run fast.
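One common way to keep those rebuilds fast is to order Dockerfile instructions from least to most frequently changed, so the build cache can reuse the early layers. A hypothetical sketch (base image, file names and paths are made up):

```dockerfile
# Hypothetical Dockerfile sketch: stable layers first, volatile last,
# so a typical code change only rebuilds the final layers.

# Base image: changes rarely, cached almost always.
FROM python:3-slim

# Dependencies next: this layer is rebuilt only when the dependency
# list itself changes, not on every code change.
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# Application code last: usually the only layer rebuilt per change.
COPY . /app/
CMD ["python", "/app/main.py"]
```

Invert the order – copy the whole source tree before installing dependencies – and every commit re-downloads and re-installs everything, which is exactly the build-environment pressure described above.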

Docker is also presenting new challenges at the production end. How do you track and manage and monitor clusters of containers as the application scales out? Kubernetes seems to be the tool of choice here.

Depending on Dependencies

More attention is turning to builds and dependency management: identifying, streamlining and securing third-party and open source dependencies.

Not just your applications and their direct dependencies – but all of the nested dependencies in all of the layers below (the software that your software depends on, and the software that that software depends on, and so on). Especially for teams working with heavy stacks like Java.
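That nesting problem is a transitive closure over the dependency graph. A toy sketch, with a made-up graph in place of what real tools would read from a pom.xml, requirements file or image manifest:

```python
# Toy sketch: the full, nested dependency set is the transitive closure
# of the dependency graph. The graph and package names below are made up.

DEPENDS_ON = {
    "my-app": ["web-framework", "db-driver"],
    "web-framework": ["http-lib", "logging-lib"],
    "db-driver": ["logging-lib"],
    "http-lib": [],
    "logging-lib": [],
}

def all_dependencies(package, graph):
    """Return every direct and nested dependency of `package`."""
    seen = set()
    stack = list(graph.get(package, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

print(sorted(all_dependencies("my-app", DEPENDS_ON)))
# A vulnerability in logging-lib affects my-app even though my-app
# never names it directly - which is why tracking the closure matters.
```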

There was a lot of discussion on the importance of tracking dependencies and managing your own dependency repositories, using tools like Archiva, Artifactory or Nexus, and private Docker registries. And stripping back unnecessary dependencies to reduce the attack surface and run-time footprint of VMs and containers. One organization does this by continuously cutting down build dependencies and spinning up test environments in Vagrant until things break.

Docker introduces some new challenges, by making dependency management seem simpler and more convenient, and giving developers more control over application dependencies – which is good for them, but not always good for security:

  • Containers are too fat by default – they include generic platform dependencies that you don’t need and – if you leave this up to developers – developer tools that you don’t want to have in production.
  • Containers are shipped with all of the dependencies baked in. Which means that as containers are put together and shipped around, you need to keep track of what versions of what images were built with what versions of what dependencies and when, where they have been shipped to, and what vulnerabilities need to be fixed.
  • Docker makes it easy to pull down pre-built images from public registries. Which means it is also easy to pull images that are out of date or that could contain malware.

You need to find a way to manage these risks without getting in the way and slowing down delivery. Container security tools like Twistlock can scan for vulnerabilities, provide visibility into run-time security risks, and enforce policies.
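A toy stand-in for the kind of policy such a tool enforces – rejecting images pulled from unknown registries or shipped with unpinned tags. The registry name and rules here are hypothetical, not Twistlock's actual policy language:

```python
# Toy sketch of a container image policy check - a tiny stand-in for
# what container security tools enforce. Registry names are made up.

TRUSTED_REGISTRIES = {"registry.internal.example.com"}

def check_image(image):
    """Return a list of policy violations for an image reference."""
    violations = []
    registry, _, rest = image.partition("/")
    if registry not in TRUSTED_REGISTRIES:
        violations.append("image not from a trusted registry")
    name, _, tag = rest.rpartition(":")
    if not name or tag in ("", "latest"):
        violations.append("tag must be pinned (not 'latest')")
    return violations

print(check_image("registry.internal.example.com/app:1.4.2"))
print(check_image("docker.io/someuser/app:latest"))
```

Checks like these can run as a CD pipeline gate, which is one way to manage the risk without manual review slowing down delivery.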

Keeping Secrets Secret

Docker, CD tooling, automated configuration management tools like Chef, Puppet and Ansible, and other automated tooling create another set of challenges for ops and security: how to keep the credentials, keys and other secrets that these tools need safe. Keeping them out of code and scripts, out of configuration files, and out of environment variables.

This needs to be handled through code reviews, access control, encryption, auditing, frequent key rotation, and by using a secrets manager like Hashicorp’s Vault.
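The shape of the pattern: code asks a secrets manager at run time instead of carrying credentials in files or environment variables. A minimal sketch with an in-memory stand-in for a real manager like Vault – the class, paths and values below are all hypothetical, not Vault's API:

```python
# Minimal sketch of the secrets-manager pattern. The in-memory store is
# a toy stand-in for a real manager like Vault; names are hypothetical.

import time

class SecretStore:
    """Toy secrets manager supporting lookup and key rotation."""

    def __init__(self):
        self._secrets = {}  # path -> (value, version, rotated_at)

    def put(self, path, value):
        # Writing a new value bumps the version: rotation is just a put.
        _, version, _ = self._secrets.get(path, (None, 0, None))
        self._secrets[path] = (value, version + 1, time.time())

    def get(self, path):
        value, _, _ = self._secrets[path]
        return value

    def version(self, path):
        return self._secrets[path][1]

store = SecretStore()
store.put("db/prod/password", "s3cr3t")
store.put("db/prod/password", "n3w-s3cr3t")  # frequent rotation
print(store.get("db/prod/password"))
print(store.version("db/prod/password"))
```

Because every read goes through the store, access can be authenticated and audited, and rotating a key does not require touching code, configuration files or environments.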

Passion, Patterns and Problems

I met a lot of interesting, smart people at this conference. I experienced a lot of sincere commitment and passion, excitement and energy. I learned about some cool ideas, new tools to use and patterns to follow (or to avoid).

And new problems that need to be solved.

Friday, July 6, 2012

Does Devops have a Culture Problem?

At the Devopsdays conference in Mountain View, Spike Morelli led an Open Space discussion on the importance of culture. He was concerned that when people think and talk about devops, they think and talk too much about tools and practices, and not enough about culture, collaboration and communication – not enough about getting people to work closely together, and to care about working closely together – and that this is getting worse as more vendors tie themselves to devops.

At the Open Space we talked about the problem of defining culture and communicating it and other “soft and squishy” things. What was important and why. What got people excited about devops and how to communicate this and get it out to more people, and how to try to hold onto this in companies that aren’t agile and transparent.

Culture isn’t something that you build by talking about it

Culture is like quality. You don’t build culture, or quality, by talking about it. You build it by doing things, by acting, by making things happen and making things change, and reinforcing these actions patiently and continually over time.

It’s important to talk – but to talk about the right things. To tell good stories, stories about people and organizations making things better and how they did it. What they did that worked, and what they did that didn’t work – how they failed, and what they learned by failing and how they got through it, so that others can learn from them. This transparency and honesty is one of the things that makes the devops community so compelling and so convincing – organizations that compete against each other for customers and talent and funding openly share their operational failures and lessons learned, as well as sharing some of the technology that they used to build their success.

You need tools to make Devops work

Devops needs good tools to succeed. So does anything that involves working with software. Take Agile development. Agile is about “people over process and tools”. You can't get agile by putting in an Agile tool of some kind. Developers can, and often prefer to, use informal and low-tech methods, organize their work on a card wall for example. But I don’t know any successful agile development teams that don’t rely on a good continuous integration platform and automated testing tools at least. Without these tools, Agile development wouldn’t scale and Agile teams couldn’t deliver software successfully.

The same is even more true for devops. Tools like Puppet and Chef and cfengine, and statsd and Graphite and the different log management toolsets are all a necessary part of devops. Without tools like these devops can’t work. Tools can’t make change, but people can’t change the way that they work without the right tools.

Devops doesn’t have a culture problem – not yet

From what I can see, devops doesn’t have a culture problem – at least not yet. Everyone who has talked to me about devops (except for maybe a handful of vendors who are just looking for an angle), at this conference or at Velocity or at other conferences or meetups or in forums, all seem to share the same commitment to making operations better, and are all working earnestly and honestly with other people to make this happen.

I hear people talking about deployment and configuration and monitoring and metrics and self-service APIs and about automated operations testing frameworks, and about creating catalogs of operations architecture patterns. About tools and practices. But everyone is doing this in a serious and open and transparent and collaborative way. They are living the culture.

As Devops ideas catch on and eventually go mainstream – and I think they will – devops will change, just like Agile development has changed as it has gone mainstream. Agile in a big, bureaucratic enterprise with thousands of developers is heavier and more formal and not as much fun. But it is still agile. Developers are still able to deliver working software faster. They talk more with each other and with the customer to understand what they are working on and to solve problems and to find new ways to work better and faster. They focus more on getting things done, and less on paper pushing and process. This is a good thing.

As bigger companies learn more about devops and adopt and adapt it, it will become something different. As devops is taken up in different industries outside of online web services, with different tolerances for risk and different constraints, it will change. And as time goes on and the devops community gets bigger and the people who are involved change and more people find more ways to make more money from devops, devops will become more about training and certification and coaching and consulting, and more about commercial tools or about making money from open source tools.

Devops will change – not always for the better. That will suck. But at its core I think devops will still be about getting dev and ops together to solve operational problems, about getting feedback and responding to change, about making operations simpler and more transparent. And that’s going to be a good thing.
