Public “Office Hours” (2019-11-27)

Erik Osterman, Office Hours

Here's the recording from our “Office Hours” session on 2019-11-27.

We hold public “Office Hours” every Wednesday at 11:30am PST to answer questions on all things DevOps/Terraform/Kubernetes/CICD related.

These “lunch & learn” style sessions are totally free and really just an opportunity to talk shop, ask questions and get answers.

Register here:

Basically, these sessions are an opportunity to get a free weekly consultation with Cloud Posse where you can literally “ask me anything” (AMA). Since we're all engineers, this also helps us better understand the challenges our users have so we can better focus on solving the real problems you have and address the problems/gaps in our tools.

Machine Generated Transcript

Let's get the show started.

Welcome to Office hours.

It's Wednesday, November 27, 2019. My name is Erik Osterman and I'll be leading the conversation.

I'm the CEO and founder of Cloud Posse. We're a DevOps accelerator: we help startups own their infrastructure in record time by building it for you and then showing you the ropes.

For those of you who are new to the call the format of this call is very informal.

My goal is to get your questions answered.

Feel free to unleash yourself at any time if you want to jump in and participate.

We host these calls every week.

We automatically post a video recording of the session to the office hours channel, as well as follow up with an email, so you can share it with your team.

If you want to share something private just ask.

And we can temporarily suspend the recording.

With that said, let's kick this off.

I have a few talking points that came up this week that I'd like to bring to everyone's attention.

But we'll also get to answering your questions too.

So if there's ever a lull in the conversation, we'll get into the new helmfile provider for Terraform that mumoshu came up with.

And by reviewing his comments on that, I discovered the Terraform shell provider, which is pretty rad.

It's the ultimate escape hatch for plugging into Terraform without being a low-level Terraform provider developer. And as always, we can wrap things up with GitHub Actions.

If there's nothing else to cover.

All right.

I was just talking with Zach.

Zach is a new member of the community.

He's been helping submit a bunch of PRs on our packages repo.

Thank you for those.

And he was just sharing some of the stuff he was doing on some big data pipeline.

Stuff like that.

Zach hey.

Thank you.

It was a bunch of PRs, what, a couple?

Sorry for the back and forth.

We've been struggling with that.

The packages repo is a little bit of a testbed for CI/CD for us, where we're testing GitHub Actions and a bunch of other little experiments and stuff.

So we... yeah, we've had some issues with stability on the pipelines in the packages repo, and that impacted your ability to contribute there.

Did you push a fix for the latest thing you were working on?

With the tools, I did. I did that.

Hopefully it will work now.

OK.

Well, I haven't.

I mean, actually, I need to run the tests again to see if that works now.

Hey Dale, welcome. Maddie, welcome.

Cool.

So let's see.

So, are any of you using helmfile today?

Yeah All right.

Cool cool.

And I take it you guys are also using Terraform, right?

So then this is kind of exciting.

I don't know if you saw the news.

But I've been bugging mumoshu for the better part of the last year that we need a Terraform provider.

So that we can integrate it with the full lifecycle of everything else going on.

And I'm pretty excited that he came out with this.

The benefit of using this is that you can now pass values like this and have interpolations, or reference outputs of other modules.

So it will streamline your internal automation of bringing up a cluster from scratch that depends on integration touch points provisioned by other Terraform modules.
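As a rough sketch of what that could look like (the resource and attribute names are from my memory of the early provider and may differ; `module.kops` is a hypothetical module):

```hcl
# Illustrative only -- check the provider's README for the real schema.
resource "helmfile_release_set" "cluster_services" {
  content = file("./helmfile.yaml")

  # Interpolate outputs of other Terraform modules into the helmfile run
  environment_variables = {
    KUBE_CONTEXT = module.kops.cluster_name # hypothetical upstream module output
  }
}
```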

Is it available?

What was that?

Is this available now?

It's not yet. I just opened up an issue here to publish a binary.

He's pretty good about that.

He'll probably do that in the next day or so.

But right now, there's no published binary.

So you've got to go get it yourself, then build and install it.

And since it's not an official provider, you need to install it in the plugins directory, like this.

That said, I'm thinking of distributing a package for it in our alpine repository.

And then, I believe (I haven't actually tried this), we can set this environment variable so you can have a shared path on the file system where you have all those plugins.

If anyone can correct me on this, that'd be great.
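For anyone following along, a rough sketch of the by-hand install (paths typical for Terraform 0.11/0.12 on Linux; the build step is illustrative and assumes a checkout of the provider's repo):

```shell
# Third-party providers go in the user plugins directory.
mkdir -p "$HOME/.terraform.d/plugins/linux_amd64"

# Illustrative build step -- run from inside the provider repo checkout:
# go build -o "$HOME/.terraform.d/plugins/linux_amd64/terraform-provider-helmfile" .
```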

All right.

I'll have a chance to test it out in the next few weeks, once I've been able to tie in my Terraform with the helmfiles that I've set up for my clients.

So yeah Yeah.

Well, we're testing it this week; Andre on my team is working on that right now.

We're going to be using it together with Terraform Cloud.

And one of the things he did was create this provider.

Dude I've been looking for one like this.

There's a bunch of them out there.

There's like a terraform-provider-external or something by someone else, but it hasn't been updated in like two years.

So here's a provider that is being maintained, with commits as of just a few days ago.

So that's cool.

And if you look at what this provider does, it exposes the underlying lifecycle hooks in Terraform, the full CRUD: create, read, update, and delete.

And that's cool because now you can just tie into that. Like, if you wanted to use Terraform around kops (a kops provider, that would be quite cool), you can now do that just by scripting it here.
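To make that concrete, a hedged sketch of wrapping kops with the shell provider. The `shell_script` resource and `lifecycle_commands` block are how I understand this provider works; the cluster name and zone are purely illustrative:

```hcl
resource "shell_script" "kops_cluster" {
  lifecycle_commands {
    create = "kops create cluster --name $CLUSTER_NAME --zones us-east-1a --yes"
    read   = "kops get cluster --name $CLUSTER_NAME -o json"
    delete = "kops delete cluster --name $CLUSTER_NAME --yes"
  }

  environment = {
    CLUSTER_NAME = "example.k8s.local" # illustrative
  }
}
```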

Sounds like something maybe you'd be interested in, Dale, I don't know.

Certainly. Pablo, welcome.

So I submitted a PR, I think a week or two ago, on the IAM user module.

Go shoot.

OK, let me see here.

So, tag-based IAM.

I'm trying to leverage the tags and attributes to do a mock-up to play through all the scenarios I saw in the documentation, where members of a team were tagged, the resources were tagged, and you could only interact with the resources you're tagged for.

So you can sort of step through, like, a role-by-tag type scenario with resources.

Yeah, well, I'm familiar with it conceptually, yes.

So I was watching all of the talks from, I think it was an AWS event, and they were walking through using attribute-based access control. They walked through a scenario where different teams were tagged, like zombie and unicorn, and what resources they could actually terminate or interact with.

So you wouldn't have team members tearing down resources that weren't theirs, that kind of thing.

Did you say you had a PR for that or.

Yeah, I think it was in a PR.

Well, it wouldn't be coming from me directly; it would come from Lunar Ops.

Gotcha, I thought you said it.

The IAM user module, I think it was.

Yeah, I didn't see an open PR there.

Unless maybe we merged it already. Let me take a look again.

Oh yeah, this was already merged.

OK Yeah.

Thank you.

Thank you for that.

Andre reviewed it, and he's the go-to guy on my team for Terraform stuff.

So the fact that he didn't tear it apart, that's a good sign.

Perfect Yeah.

Is there a release tagged?

Yeah, we cut a release; the latest one has your changes.

Thank you.

Thank you.

Thank you.

Cool. Looks like Pablo dropped off. Nadia, are you just auditing today? Any questions?

Yeah, no questions.

Just following along.

I don't... we're not using Kubernetes right now; we're using ECS.

Or just kind of playing around with it for the most part.

OK, but with Terraform?

Yeah, we're using, or I should say I'm using, your modules.

Oh, awesome.

Good to know.

Good to know.

Yeah, we've updated all the ECS modules; they now support HCL2 for Terraform 0.12 as well.

Are you on the latest?

I have... I have not integrated your changes in yet.

I actually upgraded them on my own, like, I don't know, a month or two ago.

Yeah, it's not used in production or anything right now at all.

I'll update to yours again.

Plus now we have Terratest, so they're all tested on every commit.

Also, if you guys are curious about Terratest and haven't really dabbled with it, it is pretty sweet.

I've been pushing it off for a while to be honest.

And I want to just use bats or something simple like that.

But Terratest does work really well.

So we have a source directory.

test/src is where we put the code for Terratest.

Our convention is we name the test after the example. If you go under our examples here, complete is the one that we're testing, and then the fixtures are what we test our modules with.

So one thing that we do that's nice is we distribute a Makefile here, which makes it easy to use Go in the current working directory without having to set up the whole scaffolding of the Go directory structure.

We just hack it together with some simple symlinks here.
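The gist of that pattern, as a hypothetical sketch (not the actual Cloud Posse build-harness target), is just to run plain `go test` from the test directory so contributors don't need a Go workspace:

```make
# Hypothetical Makefile sketch: run Terratest (plain `go test`) from test/src
# without requiring GOPATH scaffolding in the contributor's checkout.
test:
	cd test/src && go test -v -timeout 30m ./...
```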

So you can just run make test and it just works.

Is this run in CI, or like a pre-commit hook?

We don't use many pre-commit hooks right now, mostly because they're unenforceable unless you actually add them to the CI.

But after Andrew's demo last week, I'm more curious about it.

Specifically I want to add a pre commit GitHub action to a lot of our projects.

And then add that there.

So yeah, it's in the backlog.

I do want to do that.

But no not right now.

So yeah, for us, we run it right now in a Codefresh pipeline, a test pipeline basically.

Yeah, we just clone the repo.

We initialize the build harness and our test harness.

The build harness is what we use everywhere.

We just standardize how we interact with our projects.

And then here we run the tests in parallel.

We do the linting, and we run the bats automation tests; bats is short for the Bash Automated Testing System, I think. We have a bunch of stock tests that we distribute in our test harness for that.

And then we have the Terratest step right here, where we just call make in this test source directory. Nothing to look at.

But yeah. Was there anything specific you do in them, apart from the test?

Yeah, I didn't even open it up. Let me see.

So we're not doing anything elaborate.

So look, there's the Pareto principle, 80-20, right?

You can get the maximum amount of benefit by maybe doing 20% of the work, but getting the last 20% of benefit will take 80% more effort.

So we do the minimum. In our experience, just bringing up a module and bringing it down again catches 99 percent of all the issues you're going to have.

Testing the functionality of what you've done, yes, that's valuable, but it's going to take you 80% more effort to implement.

So in a lot of our tests we're doing kind of the minimum effort, where we instantiate the module with the example that we publish, using our fixtures, and then we destroy it afterwards.

We evaluate the outputs of that module to ensure that they match what we expected.

But we're not doing really elaborate testing of functionality.
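That minimal bring-up/tear-down pattern looks roughly like this in Terratest (a sketch requiring real cloud credentials to run; the example directory, var file, and output names are illustrative, not from a specific Cloud Posse module):

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

func TestExamplesComplete(t *testing.T) {
	terraformOptions := &terraform.Options{
		// Illustrative paths -- point these at the published example and fixtures.
		TerraformDir: "../../examples/complete",
		VarFiles:     []string{"fixtures.us-east-2.tfvars"},
	}

	// Always destroy, even if apply or the assertions fail.
	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	// Check the module's outputs match what the fixtures should produce.
	id := terraform.Output(t, terraformOptions, "id")
	assert.Equal(t, "eg-test-example", id)
}
```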

The exception to that is our EKS module; we're doing a lot more on that.

Andre spent some more time to ensure that we wait in loops until the nodes join, and that everything is healthy.

What's so cool about Terratest being in Go is you can leverage the whole ecosystem of Go modules.

So Andre implemented an event handler here to detect when the nodes join.

One of the most common problems people were reporting to us was nodes not joining the cluster.

So by implementing this test here, we ensure that no change breaks the ability for nodes to join the cluster.

And we're just using... well, not built-in support. I forget exactly what Andre wrote; I'm just riffing here.

Yeah, he's using the Kubernetes Go client libraries to make that interaction easier.

That's my dog with his bone.

Yeah, if the Terratest stuff is interesting, let me know in the office hours channel and I can have Andre prepare something for a future session.

Yes Yes it's cool.

All right.

I'll make a note of that.

So with things like Terratest, where do you draw the line?

Hold on, I have to get my dog's bone. It's driving me crazy.

I'll be right back.

Where do you draw the line on testing whether the provider and Terraform work versus whether my business logic works? I find that in testing infrastructure, a lot of times you're just re-testing stuff that should already be tested upstream.

Yeah, exactly.

You don't want to... like, all the Terraform providers have extensive testing, so you don't need to test that kind of stuff.

So it's more... that's why I think testing the values of the outputs is very valuable.

And then going back to the Pareto principle applied to testing, right? If you're going to spend 80% more effort trying to test for something, is it worth it for you right now, when 99 percent of the problems are caught just by bringing something up and down?

That's how I draw the line.

Basically, if we're going to have to spend three days developing a test for this, then we're just not doing it right now, because we've got too much going on.

So like business logic, let's say?

Yeah. I mean, it's a difficult decision. Like, let's say you have some business logic to deploy an S3 bucket with a website, and you have some CORS rules that you need there.

You could probably test pretty easily that the CORS rules are effective for you, by simply provisioning that module, then initiating an HTTP GET against a resource and checking that the headers are good.

And that would not be a big project; that would be just, like, an hour's worth of effort.
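For example, the CORS case might look like this with a 2.x-era AWS provider (bucket name and origin are made up):

```hcl
resource "aws_s3_bucket" "site" {
  bucket = "example-site-bucket" # illustrative name

  website {
    index_document = "index.html"
  }

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET"]
    allowed_origins = ["https://example.com"] # illustrative origin
    max_age_seconds = 3000
  }
}
```

A test would then apply this, issue an HTTP GET with an `Origin` header against the bucket endpoint, and assert on the `Access-Control-Allow-Origin` response header.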

Yeah, it makes sense. Just drawing those lines is hard, especially, I've noticed, for more junior developers who are just getting into integration testing.

I see them struggle with that a lot.

Yeah, you end up with a lot of tests that just duplicate upstream effort or don't provide any value for the time spent on it.

Yeah... I can't say anything that doesn't sound just kind of obvious to you or me.

When I say test here, the thought process is: really test what is absolutely unique to what you're implementing, but not what is unique to the underlying primitive or resource.

So the fact that you need a CORS rule for a certain kind of request, that's unique to your business, so that's bona fide for testing. But you don't need to test that the bucket was created.

The bucket was created if Terraform says it was created.

Where this proved incredibly useful was when we first started porting our modules to 0.12.

I think we've got like 50-60% of our modules, and all the most important modules, on 0.12 now. It's just some little remnants here and there that are not.

But when we ported our null-label module over, it was a pretty complicated module in terms of all the replacements and coalescing and formatting and stuff that we were doing with it.

So having tests on the outputs for that to make sure that we didn't introduce a regression was critical and it actually helped me catch a bunch of things along the way.

So here's an example.

We're just testing that the module is producing the kinds of outputs that we would have expected, given the kinds of inputs that we're giving it.

And those inputs are in the fixtures.

Cool.

Any other interesting announcements you guys have seen.

Cool projects lately.

Bookmarked, yeah. I'd really want to hear more.

What was it... Rio? By Rancher.

Yeah, it's like a GitOps solution; it's not recommended for production just yet, but it looks fascinating.

A micro-PaaS?

I missed this; I didn't see this.

Just keep clicking in the wrong places.

Let's see.

So Rio is a micro-PaaS that can be layered on top of any standard Kubernetes cluster. It consists of a few custom resources you apply to enhance the user experience.

Users can easily deploy services to Kubernetes and automatically get continuous delivery, DNS, HTTPS, routing, monitoring, autoscaling, canary deployments, Git-triggered builds.

Oh, very cool.

So this is pretty minimally deployed. My problem with a lot of these PaaSes is you've got to deploy so much infrastructure just to support the PaaS.

It doesn't make me excited.

So I'm wondering... I'd love to see an architecture diagram for this.

OK, so it's built on Linkerd, Prometheus, Tekton, Let's Encrypt.

OK, that's very cool.

Yeah, it would be a fun little POC too.

I'm just going to share this to the office hours channel.

So related to this.

It reminds me of something else.

I saw.

It's by Weaveworks. Not Flux; Flux has gotten a lot of attention lately because of the collaboration now between Argo and Flux. And who's backing that, is it Red Hat that's backing it? Somebody is spearheading this, is my understanding.

But no, this one is Flagger.

Have you guys seen this.

Whoops Yeah, I clicked on the wrong one.

So everyone talks about doing canary rollouts and blue-green rollouts, but when you actually get down to doing it, there's a little bit more to it.

And the right way to do it, of course, is a staggered rollout which is integrated into your monitoring system.

So it can automatically progress based on your monitoring solution or your metrics.

And that's basically what Flagger is doing. It's a controller for basically that process.

So as you see, it ties into the Kubernetes APIs to see the progress of your rollouts, and Prometheus for your metrics, and then it can progressively increase the traffic to your services.
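A minimal Flagger canary definition from around that time looked roughly like this (API version and all numbers are illustrative; check the Flagger docs for current fields):

```yaml
apiVersion: flagger.app/v1alpha3 # version current circa late 2019
kind: Canary
metadata:
  name: myapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:
    port: 9898
  canaryAnalysis:
    interval: 1m      # how often to step the rollout
    threshold: 5      # failed checks before rollback
    stepWeight: 10    # traffic increment per step
    maxWeight: 50
    metrics:
      - name: request-success-rate
        threshold: 99 # roll back if success rate drops below 99%
        interval: 1m
```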

I was looking at this one as well; it's got a bunch of interesting things.

If you do actually kick the tires on it, or do anything serious with it, do let me know. It'd be pretty cool.

Also, and this goes for anyone here: if you guys are working on anything neat and you want to do like a show-and-tell, that would be really cool.

So, Flagger and Rio. All right, I'll share Flagger in office hours here.

Have you guys heard of Podman?

Yes Yes.

Yeah, I don't know how realistic that is as far as using it in AWS, because it's supposed to be almost a drop-in replacement for Docker.

Yeah, I mean.

OK, I guess it really depends on what perspective you approach this stuff from.

Podman comes up time and time again when you're looking at how managed CI/CD providers allow building Docker containers without giving away the crown jewels.

So basically, they wrap around Podman for doing those Docker builds in a secure way.

But yeah, I haven't tackled it. At my level, I'd say there are all these micro-optimizations just like this that are really awesome.

And I think are maybe worth pursuing, especially if it's just like for your closed ecosystem within your company.

But at Cloud Posse, what we're really trying to do is provide the roadmap for integration of all of these components and systems.

And as much as possible, we want to defer thought leadership to the tools and technologies that we use.

So for example, for Kubernetes today we've been mostly using kops. We do have the EKS Terraform modules as well.

But I bring this up.

Because if kops decides they want to use Podman, that's awesome; we'll support it.

But if kops doesn't support it out of the box, we're not going to break compatibility with them just to support it, and introduce more and more stuff to manage ourselves.

Are any of you running highly threaded apps on Kubernetes right now, and using Prometheus?

Yeah not yet.

I actually do want to reintroduce using Prometheus, and kind of clean up our deployment pipeline as well, and do actual canary rollouts with it.

But nothing yet.

There are a few products coming up for next year that may actually require that, but not at present.

OK Yeah.

Why do you ask?

So there's a recent fix that has been merged into the alpha channel of CoreOS Container Linux. It relates to how CPU throttling is implemented in the Linux kernel, and a big problem with how you can monitor for it under Kubernetes. The problem is that you get a lot of basically erroneous CPU throttling alerts if you're running highly threaded apps under Kubernetes, so I'm just spreading the word on this fix.

It's in the CoreOS alpha channel right now. Unfortunately, it doesn't look like it's going to hit kops, or the Debian image for kops, anytime soon.

Welcome, welcome. I'm all ears.

What was it?

I have a request.

All right. Next year, I'm hoping you actually do another session on geodesic and your actual workflow, a bit more of a deep dive, I guess.

I liked what you actually showed us last time; it improved my workflow greatly. But I think there's a lot more that you guys actually do, which is a really cool process. I just want to peek behind the curtain, pretty much.


Yeah, that'd be pretty cool.

Also, someone like Alex might be interested in showing his workflows and stuff. He's using geodesic weekly, if not daily; his workflows are efficient.

I'm sure you guys have a lot more tricks and tips.

But I'm happy to do an ad hoc thing.

Yeah, also, there's a great one: I should get Jeremy on the line for that one.

Jeremy has taken it to the nth degree. What he does inside of geodesic is stuff that I haven't even kicked the tires on.

Basically there's a bunch of extensions that he's added.

So you can add customizations for your experience, the way you'd like, that don't necessarily affect other developers.

This is kind of antithetical to our original plan, which was that everyone should have an identical environment.

But the reality was that different developers have different shortcuts, different aliases, different ways of working, and I gave up trying to fight for the one true way.

The distinction there is that the tools, and that portion of the environment, should all be the same.

Yeah, but if somebody uses vim and somebody uses emacs, who cares, right?

I agree.

I think that's the distinction like that.

There's a line there where it doesn't matter anymore.

So actually, this just jogged my memory. It's a total non sequitur here.

One of the things that we've been seeing a lot more problems with, and you've been experiencing this as well, Alex, is basically Fluentd directly integrated with Elasticsearch, and overwhelming Elasticsearch. Elasticsearch is just a very dainty, sensitive, finicky search product.

It doesn't like ingesting too fast or it will just complain.

So a couple interesting things there.

We haven't implemented this yet, but I was just doing some research and bringing this up because one of the announcements was just back in October.

So this is pretty new.

So I think the better architecture for us to implement on this is going to be one using the Fluentd-to-Kinesis plugin, and that's existed for a while.

So using Kinesis as a buffer in front of Elasticsearch.

Yeah, exactly, controlling the ingestion.

So that's one thing.

But once it's in Kinesis, (a) it can buffer it, and like you said, (b) you can send it to multiple destinations or sinks. One of those sinks can be S3.

So now you can have long-term storage in S3, which can be queried with, what is it, Athena or something.

And then you can also have your real time search with Elasticsearch, and a shorter retention period because you only need maybe the last week or something like that.

So this was the link here for it. This is the Fluentd Kinesis plugin.

But then Kinesis Firehose, which is already supported in Terraform, can ship the logs to Elasticsearch. So, an Elasticsearch destination.

So it really does look easy enough, right?
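Sketching the Terraform side of that (the resource is real; the referenced role, domain, and bucket are hypothetical, and a real delivery stream needs proper IAM policies):

```hcl
resource "aws_kinesis_firehose_delivery_stream" "logs" {
  name        = "logs-to-elasticsearch"
  destination = "elasticsearch"

  elasticsearch_configuration {
    domain_arn = aws_elasticsearch_domain.logs.arn # hypothetical ES domain
    role_arn   = aws_iam_role.firehose.arn         # hypothetical IAM role
    index_name = "logs"
  }

  # S3 doubles as the long-term archive (queryable with Athena).
  s3_configuration {
    role_arn   = aws_iam_role.firehose.arn
    bucket_arn = aws_s3_bucket.log_archive.arn # hypothetical bucket
  }
}
```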

And then related to that, there was just that announcement. Yeah, I've used Kinesis and Firehose over the years, outside of Kubernetes, to deliver logs to Elasticsearch. It works fine; I use it.

I used it last year from a bunch of Windows boxes using the AWS logs agent; it works great.

That's cool.

Yeah, I mean, I think this could have been implemented outside of this.

Like you said I guess it's just they just made it easier, more turnkey now.

Yeah, back when I was doing it last year, I used to have to munge it through a Lambda. You could write pieces for the Lambda to munge the logs into whatever format you needed for Elasticsearch.

Yeah, that makes sense.

Cool Any other cool projects you guys came across recently.

I'm going to head over to my own GitHub stars and see if there's anything interesting I've found.

Alex anything.

Sorry, I missed that; I was responding in Slack. What was the question?

Oh, OK.

No, I haven't come across any cool little projects. I mean, I have a lot of cool little projects in my head; I haven't actually played with any of them.

Yeah, Prometheus, as you know, was broken for a couple of weeks. I finally got it fixed up and found out there's a whole bunch of alerts and problems that had been going on. There's a little fragility there.

I've noticed that, like, Prometheus can still collect metrics even if it can't display them, or you can't use them, because that portion of the stack isn't working.

But that doesn't necessarily cause the watchdog that we set up an alert manager to go off because metrics are still coming in.

And so you don't actually know that stuff is broken until a user reports it, like a Grafana dashboard failing.

Alertmanager also has the same problem: like, if your watchdog itself fails, you don't know. That's one of the things.

For us, we're just spread thin, so we don't have folks that are focusing on that daily. So we just didn't notice it for like a week.

Yeah Yeah.

And then when you need it.

It's actually down.

That's frustrating.

Yeah Yeah.

Those are things we can roll out in our next engagement.

Yeah, every engagement we do with a customer, we basically extend the alerts, we extend the things we monitor.

So that might be one of those areas. It's looking like it's going to be EKS; that's where we're probably going to be extending our support, as it's maturing as a product. It seems pretty popular, and now with managed node pools, that's pretty awesome.

Compare that, though, with Spotinst Ocean, which we briefly discussed last week.

Spotinst Ocean is basically managed node pools for Kubernetes, by Spotinst. And it also handles the complex calculus of scheduling the right kinds of nodes. So that begs the question: if EKS has a decent enough managed node pool solution, is the surcharge from Spotinst worth it? Where is their value prop at that point?

Yeah Yeah.

So I guess the answer is it depends.

And what you can do.

Well, so first of all, EKS managed nodes don't support spot at all today.

It was just released, what, a week ago, so I guess that's probably coming very soon.

But what I mean by "it depends" is it depends on your workloads and how consistent they are, basically what kinds of instances or nodes you need.

Will you get by with a naive kind of spot fleet configuration, or do you need something more complicated?

What I think is so cool with the Spotinst thing is how it sees how much memory you need.

And then it finds the right kind of instance, for the cheapest price that satisfies that requirement.

And I think that's really just very cool.

So it's not just using spot. It's also right-sizing based on the instance type.

And for that, you need a scheduler in Kubernetes; Amazon would need to develop a scheduler for Kubernetes that could do that.

Yeah, it seems pretty slick.

I want to look at it more when I have any kind of time.

Also let's see here.

So, someone on my team has been working on this and just implemented it. Let me do a search on repo names.

Let's see... since I didn't work on it, I don't remember.

Yeah, this here. I should publish this to the registry.

So we just developed this module here, which extracts the launch configuration data for kops, so that you can use that together with our helmfile for Spotinst to set the parameters that are needed.

There are a bunch of parameters for Spotinst Ocean that come from your Kubernetes cluster, so this is needed in order to automate that end to end.

I guess we haven't merged yet.

Since it's a work in progress.

Yeah, here.

So that other module is to get kops closer to Spotinst, I don't know.

I would have expected to see the launch configurations in here somewhere, or the kops stuff somewhere. But either way.

So we actually cut bait on that. Spotinst supports kops natively, so to say; they have an explicit documentation site for setting up kops, and we tried that.

And the kops controller was getting caught on something, so we couldn't figure it out.

And even if you implemented that, there were large parts of it that were not fully automated, that were ClickOps-y.

So instead: Spotinst also has a Terraform provider for Ocean, and that's what we implemented.
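A hedged sketch of that Ocean provider approach (the `spotinst_ocean_aws` resource exists; the attributes shown are abbreviated and the variables are placeholders):

```hcl
provider "spotinst" {
  token   = var.spotinst_token
  account = var.spotinst_account
}

resource "spotinst_ocean_aws" "k8s" {
  name            = "ocean-cluster" # illustrative
  region          = "us-east-1"
  subnet_ids      = var.subnet_ids
  image_id        = var.ami_id # e.g. extracted from the kops launch configuration
  security_groups = [var.node_security_group_id]
  # Ocean then right-sizes and bin-packs spot instances across types automatically.
}
```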

So, yeah, that makes sense for the kops-based Kubernetes clusters.

Yeah So this is what we implemented.

Yeah, this is I guess just a teaser.

Igor is going to present on the ThoughtWorks technology radar probably next week.

We knew that today was going to be pretty quiet given that Thanksgiving week.

So we're going to talk about that.

And then we'll probably do a better presentation on Spotinst as well.

Sure thing.

Did you have a thing on GitHub Actions? Did I miss that? I came in late.

This is just a placeholder.

Yes, I can talk about GitHub Actions all day long because I've been doing a lot with them, and I think they are a lot of fun.

Yeah, I just need to actually write my first one to understand how it works.

Yeah, I think a great place to start would be the cloudposse/packages repo on GitHub, and here.

Here I implement three actions that we're rolling out. Here are the actions, or workflows.


Here are the actions we're implementing for all of our repositories across the board.

For now these are related a little bit to the fact that we do a lot of open source; that matters for some of this.

But one thing is auto-assign.

I really dig this PR.

So in it, it references another.

Basically, you can build a library very effectively for your company with all of your actions and then reference them very tersely just like this.

We use them.

So this here will auto-assign the PR when it's opened; this here will automatically greet new users who open an issue or pull request with a comment.

Inviting them to join our Slack community.

This here is the one I'm getting the most benefit out of.

Like all the other ones are nice, but this is the best.

I think it's the auto-labeler; immediate payoff.
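For reference, a minimal auto-labeling workflow might look something like this sketch; it uses the public `actions/labeler` action, and the file name, label names, and paths below are purely illustrative, not necessarily what Cloud Posse ships:

```yaml
# .github/workflows/auto-label.yml -- hypothetical sketch using the
# public actions/labeler action (label names and paths are made up)
name: auto-label
on: pull_request

jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      # Applies labels to the PR based on which files changed,
      # driven by glob patterns in .github/labeler.yml, e.g.:
      #   helmfile:
      #     - vendor/helmfile/**
      - uses: actions/labeler@v2
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
```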

It's great because when you go to pull requests, you can look at what we've had going on here.

So every night at midnight GMT we have a process that kicks off.

That looks to see if there are any new packages that need to be updated.

This is maybe something for you that you'd think is pretty cool.

So because it's a monorepo, our packages, the labels make it very clear to see: yes, we updated packages.

But what was updated.

OK, so here we updated kops and Teleport.

Here we updated helmfile and sops.

Here we updated Codefresh and Datadog.

And here.

It's gitleaks and helmfile.

It's just a coincidence that it's two per day; really weird phenomenon.

Other times it could be dozens of packages, like this.

One thing that would really help me is if you did the same thing for your helmfiles repository.

Yes, I guess I totally want to do that.

I could just pin them, like when I want to use the helmfile.

I use master.

I could just pin it to the newest release.

But then I lose that context.

Why am I upgrading that specific file out of your repository.

So I prefer to pin it to the newest release of that helmfile, but actually finding that number is kind of a pain in the butt.
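One way to do that pinning, assuming helmfile's go-getter support for remote sub-helmfiles; the repository path, file name, and tag below are illustrative, not real release numbers:

```yaml
# helmfile.yaml -- pull a remote helmfile pinned to a tagged release
# (illustrative path and ref; check the repo for the real names)
helmfiles:
  - path: git::https://github.com/cloudposse/helmfiles.git@releases/prometheus-operator.yaml?ref=0.45.0
```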

I mean, I think you can eventually get down to a commit.

Ah, I see what you're saying.

What is the latest release related to that, right.

Yeah Yeah Yeah.

Well, or, as always, the latest release that contains the newest Prometheus Operator, or the first release that contains the newest operator.

I should say.

So if you auto-label them when a release is made.

Hey these are the.

Because otherwise you're relying on a human process.

Remembering to put, like, something in brackets, right.

Yeah, just click a label that says Prometheus Operator and see all the times a release included changes to the Prometheus Operator.

That allows me to a lot more quickly go through and say, OK, this was upgraded for this.

And this has the bug fix.

I need.

That's the goal there.

Yeah, that's really good.

That's a good tip.

The only reason it's not here is I haven't gotten around to rolling it out yet, but I'm going to roll that out.

That's a good tip.

That's one of those things where I guess you just have to take the file names in the changed files and make those into labels somehow.

Yeah, so we've generalized the pattern.

Because while we practice polyrepo across the board.

We do have a few monorepos for sure.

So we kind of generalized the pattern for that; under packages you'll see how we're doing it in the Makefile.

What we do is we have a target here named after the file.

One thing that's often forgotten these days, because make is used so much for phony targets (targets that don't correspond to files), is that make's origin is actually file manipulation, using file modification times to trigger build processes.

So here, this is saying if this file doesn't exist, then go ahead and generate it.

So that's a yes.

This is generating the labels for that by running this target here.

And we're just extracting; we iterate over all of the packages defined here.

One thing I know I just learned in make is how you can define locally scoped, target-specific variables like this.

So I would say that's a powerful make trick.

And then we iterate over those, stripping the slash and appending a star star.

So the outcome of running that target.

OK, so then in here, in the readme deps, we just trigger that as a dependency every time we run make.

And under GitHub auto label here.

You can see the ones that we generated from that process.
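The pattern being described, a real file target that only regenerates when the file is missing, plus a target-specific variable, might look something like this; this is an illustrative sketch, not the actual cloudposse/packages Makefile, and all names are made up:

```make
# Illustrative sketch of the pattern described above (not the actual
# cloudposse/packages Makefile; names are made up).

# Target-specific ("locally scoped") variable: PACKAGES is only set
# while this target and its prerequisites are being built.
auto-label.yml: PACKAGES = $(wildcard vendor/*)

# A real file target: the recipe runs only when auto-label.yml is
# missing or older than its prerequisites -- make's original purpose.
auto-label.yml:
	@for pkg in $(PACKAGES); do \
	  printf '%s:\n- %s/**\n' "$${pkg#vendor/}" "$$pkg"; \
	done > $@

# Phony targets don't correspond to files and always run; here the
# readme target pulls in the generated labels file as a dependency.
.PHONY: readme
readme: auto-label.yml
	@echo "rebuilding README..."
```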

So I'm just going to do code generation like that for the helmfiles, like you suggest.

Yeah, so a public service announcement then: if you were relying on our packages to be stable because they weren't updated very frequently, that was a false promise of stability, and it's no longer the case.

Now they are updated well within 24 hours of a new release; that is true for everything except for these packages here where we're pinning to a specific major release, or sorry, a specific major.minor release; these here are not yet updated automatically.

But everything else that doesn't have a version number like this is updated automatically.

So these are all updated automatically, et cetera.

All right then let's see.

Any last thoughts here.

Just quick.

Go ahead.

Yeah, I was going to ask about Terraform 0.13, or how long, you know.

Does anybody know how long Terraform 0.12 is going to be around before 0.13, I should say.

That's a good question.

Anybody have any Intel on that.

I don't know.

My understanding is any jump from 0.12 to 0.13 is going to be minor by comparison.

The 0.11 to 0.12 thing, that was rough.

Yeah, that was painful that's right.

And that despite it being a minor version bump.

No, but Yeah.

But you know.

But semver pre-1.0 is a different construct then.

Yeah Yeah.

Yeah, so I know that because at Cloud Posse we exploit that as well.

So most of our work is all pre 1.0, which means that the interface is subject to change.

And it's kind of hard to argue that our stuff should not be pre-1.0 when Terraform itself is not 1.0 yet, but you should know that you elicited kind of a cold sweat when you said 0.13.

No, it was not fun updating all of our modules.

So all right.

So all right, everyone.

Looks like we've reached the end of the hour here.

This about wraps things up.

Thanks again for sharing.

I always learn a lot from these calls.

It keeps me up on my tip toes.

A recording of this call is going to be posted in office hours, in a little bit.

And I'll see you guys next week same time, same place.

All right.

Thanks Thanks.

Public “Office Hours” (2019-11-20)

Erik OstermanOffice Hours

Here's the recording from our “Office Hours” session on 2019-11-20.


Machine Generated Transcript

Welcome to Office hours.

It's November 20th.

Today my name's Eric Osterman.

I'm going to be leading the conversation.

I'm the CEO and founder of Cloud Posse, a DevOps accelerator; we help startups own their infrastructure in record time by building it for you and then showing you the ropes and how to do it.

For those of you who are new to the call the format of this is very informal.

My goal is to get your questions and answer them the best we can.

Feel free to unleash yourself at any time if you want to jump in and participate.

We host these calls every week, we'll automatically post a video of this recording to the office hours channel and I'll follow up with an email as well.

So you can share that around with your team.

If you want to share something that's private.

Just ask.

And we can temporarily suspend the recording and do that.

So with that said, let's kick this off.

So here are a couple of key points, we can cover today.

First and foremost, though, I want to get your questions answered.

But if we encounter some silence.

This is what we can discuss.

Namely, there's been some work by AWS in partnership with HashiCorp on supporting landing zones in Terraform.

Unfortunately, this is only an enterprise feature right now.

Mike Rowe, who joined us, has kindly kicked the tires of AWS Control Tower.

I'll let him share what that experience was like.

And then we also have a couple announcements: AWS announced managed node groups for EKS, and Terraform already supports it.

So we'll cover that, and time permitting, we can always go into GitHub Actions or Terraform Cloud, things like that.

All right.

So I'm going to hand this over.

Anybody have questions.

First of all, that we should get answered right.

Let me check what's going on in the Slack channel; I don't have it in front of me.

Office hours.

All right.

So no questions there.

All right.

Well, then Mike, do you want to talk about your experience with AWS Control Tower, because this is the first I've really heard of it from a firsthand account of a user.

So I think others would be really interested.

First would you want to start with an intro of what it is.

Otherwise, I can do that.

Yeah, if you want to just do a general intro of what it is, especially with how you would see it, you know, in terms of working with the products that you guys have put together, then I'll actually demo it and give you guys kind of an overview of how it runs.

And then I'll probably curse professionally for a few minutes and then hopefully that'll close everything out.

All right, really cool; so for those of you that don't know Cloud Posse.

We do a lot of Terraform, and we also have a project on our GitHub called reference-architectures, and this is what we've been using to support our own consulting when we onboard a new customer: basically lay out the AWS account foundation and do it in this opinionated way where we use one GitHub repository per AWS account.

So basically treating AWS accounts almost as applications in and of themselves.

This process is not perfect right now.

Anybody who's tried it has probably run into some challenges because there's a lot of edge cases.

There are a lot of rough edges when you're working with AWS accounts; namely, they're not first-class citizens for automation, and you can't do many things that you want to be able to do, like testing these things with CI/CD; you'd want to be able to destroy an account and bring it back up again.

Well, you can't destroy an account in Terraform unless you first log in with a whole bunch of ClickOps and accept the terms and conditions, and all this other stuff.

So that's meant that we can't automate AWS accounts to the extent that we would like.
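To make the gap concrete, here is a rough sketch of the Terraform side using the AWS provider's organizations resource; the account name and email are hypothetical, and the comment captures the manual-destroy caveat described above:

```hcl
# Hypothetical sketch: creating a member account with Terraform.
# Caveat: destroying this resource only removes the account from the
# organization; actually closing the account still requires logging
# in as root and clicking through prompts, i.e. the ClickOps gap
# described above.
resource "aws_organizations_account" "sandbox" {
  name      = "dev-sandbox"             # illustrative account name
  email     = "aws+sandbox@example.com" # must be globally unique
  role_name = "OrganizationAccountAccessRole"
}
```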

Well, good news is Amazon came out with a couple things.

One is this concept of landing zones, which are like AWS accounts that are pre-provisioned with certain configurations and settings, pretty turnkey; and the other is their Control Tower, which is basically a product designed to provision landing zones, as best as I understand it.

This sounds really cool on the surface because hey, this is not something that we technically want to continue supporting like if Amazon just made it easier to create AWS account architectures like a vending machine that be awesome.

So I've been holding out hope that we could use this control tower.

Well, Mike here had reached out to us because he was starting down the path with a new company, set to provision the reference architectures, and I said, hey, wait, before you do that.

Why didn't you check out control tower.

And let me know what you think about it.

I thought I was doing him a favor.

So we'll see what he actually thought about that now at least I was willing to jump on the call.

So it doesn't show.

I'm totally pissed off that Erik led me down this route.

But so I had a clean AWS account.

And I thought, you know what.

I needed.

I wanted to provision individual accounts for each of our engineers to have their own sandbox and keep it totally isolated from everybody else.

You know it sounded like the ideal use case you know we're using this.

So am I able.

Can I share my screen.

Eric Yeah.

Let me stop sharing.

Go ahead.

All right.

So let me see.

Just a reminder that we are recording this.

Make sure you don't have any secrets on that.

Yeah, I think I'm good; I'll watch that.

So Yeah.

So let's say interesting perspective.

The fingers on the keyboard.

You know.

I don't know why they put the video camera down at the bottom, right at the base of the screen; it makes no sense.

So on it.

All right.

Are they gone.

What do you guys see, are you seeing my screen? I see your smug mug.

All right.

So where, can I get this to share.

I thought I was sharing the screen, maybe I'm not.

There we go; are you getting my screen now.

Yeah, I see it.

I see some.

I see the single sign-on with the AWS accounts listed below there, right.

So the first thing to keep in mind is once you install Control Tower, it goes through and provisions all of the accounts, the log archive account, automatically for you.

So you know, one of the things that I actually posted in one of our Slack channels is: do we still need to go and increase your account limit?

And the answer right away is yes, because the minute you install Control Tower and provision it, it takes away two of your four accounts.

So you go through with a clean account.

And you can at most create two sub accounts.

But so you've definitely got to start with an increase in your limit out of the chute to do anything worthwhile.

And just to add some context there for those who aren't running multi-account AWS: out of the box, basically, you sign up for AWS.

They give you four accounts.

And if you want to provision more.

And these days like we typically provisioned like seven to nine accounts out of the box.

Well, you just got to open up a support ticket and say you need more accounts and add a little justification.

It's kind of ironic though, because they built this AWS Control Tower thing as the perfect vehicle for vending accounts, and AWS has crippled it out of the box to even use it.

So kind of weird.

Yeah And what make.

And another thing I found out which they don't document anywhere is there's like a magical number of 10 accounts that they could do quickly.

And if you want to go over 10, you've got to do a special justification for us.

So if you're experimenting if you ask for 10.

You can get that easy.

I asked for 15 and managed to get 15 although it took several back and forth.

OK So.

So it uses AWS Single Sign-On in order to log into all your accounts.

And so I'm signed on as our master account here.

And I don't understand how this whole thing works.

So if you guys will forgive me, it looks rough.

It is.

And I don't quite have a full understanding of it.

But But let's say I want to provision.

So this is, you know, it creates the master account, a log archive, and an audit account.

Now let's say I want to provision a new account.

So this is I need a new isolated account.

So I fire up the management console and it basically does the assume role within this as the administrator.

And so in order to actually go provision it a new account.

I came into this.

And I was like, OK.

Well, let's go to control tower because that's what I just installed right.

So we go into Control Tower, you muck around in here for a while, and you realize you can just, you know, peruse all of your organizational units and things like that.

But it doesn't really give you.

This is where you can configure it and look at all the guardrails; guardrails are their way of saying the sub-accounts that you create have these different restrictions placed on them, and they have some standard guardrails, like you can't create public read access to log archives and all this other stuff.

So you can really drill into all the restrictions that you place on these sub accounts.

But after a while, if you're trying to create a new account.

I finally realized, oh, I don't do it here.

I've got to go to a different Amazon product called the service catalog.

And so the service catalog is actually where you can launch this you know this product.

And so this is an apparently there they're setting this up so that you can create different portfolios with different types of products.

I'm just using the standard product.

And so if I want to create a new account.

I've actually got to go in to the control tower, and actually launch this product.

And so I'm going to I'm going to walk through a hypothetical I won't create it at the end.

But let's say I want to create a test account.

And now we start to get where it's interesting.

So the single sign on e may l this is using their single sign on corporate directory.

So I know I could use my you know might screw it in my test email.

Now, the next thing is when you're creating an account.

One of the things with Amazon accounts every Amazon account has to have a unique email.

So you cannot reuse an email.

And if you do use an email that's already in use, it will go through the entire CloudFormation process of setting this up and then throw you an error saying, oh, that email is already in use.

OK, to further tell you how rough around the edges this is.

If you hit your Amazon limit, it will go through and try to create your fifth account and just give you the very helpful message: Internal error.

And it took me 50 minutes of googling before somebody finally said, oh yeah, you need to increase your account limit.

You need to increase your account limit.

So you know what I've been doing is I've been just creating this adding something extra to my email to create it.

Right And so now you do have the nice feature here that you can reuse your same single sign on.

You know you can standardize it on your single sign on email.

And have a different account email for each of the accounts that you're setting up.

So you can have like one single sign-on and three accounts by changing, you know, whatever.

Let's not do this period.

So you know, then, whatever my test account is, you can tag it as you would expect, and you can set up SNS topics.

And then you can launch this, and after about 15 minutes of chugging it'll send an email to this email and say, here's how you log in.

So let me go ahead and log out of this.

And I'm going to go back and actually sign all the way out and almost sign in just as me as a user.

And so let's sign out.

And by the way, I have not gone in and set up my MFA for this.

So right now.

I am just doing this.

So when you sign into this new account.

It's not even the standard Amazon look and feel until you drill into your different accounts.

And so this is my account that I want to sign into as my admin.

You know as an administrator and now I'm into my account.

So that's how control tower works.

They've instituted this kind of strange UI that is sort of confusing to manage, especially when you come at it from past experience of using AWS; in some ways, it is cleaner.

It's just that it's different.

So one question I have.

And I don't know if you know the answer.

But with AWS accounts we provision, even like using Terraform or whatnot.

You specify that email address, and each one of those accounts actually has a root user associated with it.

And if you don't go and do a password reset on that, whoever does can now basically initialize that, especially if you're using a role account, like a role email address.

They can then do a password reset on it, set up MFA, and basically get you locked out of the root account at the master account level.

Have they done things to prevent that yet.

No no.

All this is really doing is managing your AWS Organizations for you.

Yeah And so each of these accounts that you set up is a sub account, and it is you know that that's all it's essentially managing as part of that.

Yeah All right, let's.

That is interesting.

So thank you for doing that.

Anything else is part of we want to show or.

No, not really.

I just, it's, you know, it works; it just takes some putzing around.

I wonder now if it would have been better to go through with Terraform; you know, for me, I think using Terraform and kind of the reference architecture might have been easier.

But this in the long run might have been better for other people to kind of step in and start to manage.

Right; oh, you know, I'm almost of the opinion that this is, you know, it's good if you're not a Terraform user, but it is definitely not turnkey.

Yes, you've got to work through it and kind of figure it out, and it's definitely ClickOps.

Yeah Yeah.

A couple questions; I attended a little bit late, but it looks like they stole that UI right from OneLogin, because that's OneLogin's idea of, here's a list of roles.

You have access to.

How do you.

How does it know what roles to give users access to.

That's a good question; I ask that because I'm curious if you could set up roles outside of their Control Tower, and still provision your accounts and everything with Terraform; you'd make the role, and maybe it's just a tag on a role or something that gives it access so that it'll show up in that menu for people.

That's a good question.

I have not delved that far into it.

So OK, fair enough.

Yeah And you are you already had your master account set up with some level of single sign on or no, actually no.

OK, I started with a fresh account, and it set up single sign-on and everything as part of installing it.

I mean, with the Control Tower, can you authenticate against Active Directory or somewhere like that.

Now this is actually authenticate.

You can.

But when it's set up single sign on.

It's set up as just an Amazon single sign-on.

OK single setup.

So cool, well.

So then that that's actually a nice segue into this.

It was posted in office hours earlier this week.

Just wanted to call it out to everyone else; this is interesting because if you're in LA and you went to the West LA DevOps meetup, it was hinted that this was coming.

So a professional services team inside of AWS, together with HashiCorp, has been working on adding some support for landing zones; landing zones are kind of like what Control Tower provisions.

So it's not the Control Tower piece; it's that sub-piece.

And apparently within the enterprise offering of Terraform cloud.

Unfortunately, because it's integrating with Sentinel, they have some support.

Now for landing zones.

I haven't dug deep into this.

But if this is the stage that you're at.

It's kind of interesting.

Maybe to be aware of some of these things going on because they could have long, reaching impacts on provisioning or AWS accounts.

But what's exciting about this is that there's a general movement towards vending machines for AWS accounts, and some of the interesting core concepts that it brings up here is that you have your core set of accounts.

These are the ones that you initially set up.

But then throughout the course of your organization's life, you're going to be adding additional accounts, perhaps for developer sandboxes for individual apps that have different compliance requirements and making that more turnkey is the idea here.

So Yeah, they talk about your core accounts and baselining the settings for those.

And then also your baseline security, period.

Has anybody gone deeper on this than me, has some interesting insights or things that they got out of it.

I just think it's a little bit funny that they should choose the vending machine metaphor, for a number of reasons, but one thing that comes to mind is Steve Gibson on his Security Now podcast.

Vaguely familiar with that, definitely heard of it.

I haven't been listening to him; oh, it's in its 13th year.

It's an institution in the DevOps community.

I mean, like old school, older school shall we say; anyway, he's got a couple of episodes where he talks about the problem of securing the vending machine.

It's sort of one of those holy grails.

Figuring out how to have something out there that can dispense assets in a secure way.

But yeah, I guess I can see that; I'd also challenge that, because through automation and repetition, that's how I think you achieve greater levels of security and predictability.

So therefore, with this vending machine model, if you can get it right.

You can at least stamp that out.

But if you make a mistake in that template.

Yeah Then you've rubber stamp that out to a dozen accounts and you have that problem.

Somebody's saying it's in the top bezel; oh no, that's unrelated, sorry, the sidebar conversation here is on the laptop with the camera in the lower corner of the screen.

You know that that would be the most exciting part of what I talked about.

I have a hard stop at two, FYI, Eric.

So I'm OK.

No worries if you got a drop off.

I totally understand, and thank you for stopping by, Mike.

Yeah, no worries.

All right.

So, other exciting news; oops, we were jumping ahead of ourselves here.

Yeah, the exciting news is EKS managed node groups; as we all know and have been frustrated with, Amazon's EKS was crippled from the start by only managing the masters.

Now that's no longer the case, like with GKE.

You can spin up fully managed node pools.

Now, it remains to be seen how rough it'll turn out to be in the beginning.

And what kinds of issues people will have.

But at least we finally have it in AWS.

Yeah, I just noticed that; I've been reading about this feature.

So yesterday I read about it.

But I think it is kind of like, you know, just regular instances.

On-demand instances.

But right now.

I think it doesn't support the spot stuff yet, basically.

Yeah, that's what I think; though on my setup, you know, I always use spot instances to do some low-priority stuff, like, it's not important.

So it can go away.

But here, I think it doesn't support the spot instances yet, because there's no option in the CLI and no option in Terraform; I've checked the Terraform docs.

By the way, I'm very surprised that Terraform actually acted on it very quickly.

Maybe they knew about it ahead of time, because they just released that feature, like, not even more than 24 hours ago.

Yeah Yeah.

That's really cool.

I think that there's more and more collaboration, first of all, between AWS and HashiCorp.

So I would expect more of that happening.

Er, sorry, I'm having a desktop issue here.

I'm trying to see; anybody have any questions related to Terraform, Kubernetes, Helm, general DevOps, automation, release engineering, logging, SRE, Prometheus, Grafana, you name it.

One input from me is that I think with Control Tower, there's still no option to activate or provision MFA.

I mean, you have to do it manually.

But it's not something that Control Tower lets you do.

Is that true.

Because I looked into that.

But yeah, you can do it manually afterwards.

But not with the product or not with the service they provide.

But now it still needs to do some kind of stuff.

Maybe even with their own MFA stuff.

But right.

Or for example, third party integration.

Like OK, whatever.

You cannot do that.

I think.

Yeah, that sounds interesting.

Anybody know about that.

We lost Mike.

So anyway.

Yeah, thanks for bringing that up, I'm not sure about that.

But, you talked about using a spot instances.

And that you don't like that the node pool, the node group, doesn't support them.

Yeah, for the EKS node groups.

Yes, I think it's hot stuff.

But does it still support spot fleets; I don't recall.

What Yeah.

But I wonder if I.

That's a good question.

I wonder if spot has already responded with something about.

But I think I'm thinking the fundamental underlying technology doesn't support it yet.

Igor, I'm not sure, because this literally was announced like a day or so ago.

Yeah but what is this.

I always go to Terraform resources you know.

I just came across that, and I wasn't able to find a directive saying spot instances, yes, or whatever.

So there's nothing; you only give the instance size, the minimum and maximum.

That's it.

Can you tell me what.

So what is the difference between node groups in EKS and, let's say, an auto-scaling group with a launch configuration.

So this is like GKE now, or Azure Kubernetes.

You don't have to manage the nodes yourself; that's right.

That's their main difference.

So I think, maybe, Igor.

I wonder if what you're asking is, or I should say maybe what would answer your question is: when EKS manages a node group.

It updates, like, the associations to security groups and all of these other metadata things that go beyond just the auto-scaling group; is that what you're asking, probably, right.

I still don't get it.

An auto-scaling group is like a list of machine types with some parameters around, you know, their properties.

But it's not.

It does not include these the access control lists and the security group memberships for a particular cluster.

Yeah, exactly.

You have to provision all the security groups.

You've got a provision the auto scale group like this.

So this is our Terraform module.

The EKS workers module, which is what we have been using to provision the node pools for EKS.

And then this lets us specify a few dozen different parameters for that.

Now instead there is a first-class primitive, the AWS EKS node group, that replaces our module basically.

So instead of using this module to provision your workers you can now provision this this resource instead.

And that maps one-to-one to a resource managed by AWS, of course.

Now we can.

It looks like there are a lot fewer configuration options here.

So it looks pretty basic by comparison.

If you want really fine grained controls, then it looks like you get a lot more of that when you're using the raw underlying technologies.

The building blocks like auto scale groups.
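A rough sketch of that new first-class primitive, with illustrative values; the IAM role, cluster, and subnet references are assumed to exist elsewhere in the configuration:

```hcl
# Illustrative sketch of the new aws_eks_node_group resource.
# Compared to rolling your own ASG plus launch configuration, only a
# handful of knobs are exposed (instance types, disk, min/max/desired).
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.this.name  # assumed cluster resource
  node_group_name = "workers"
  node_role_arn   = aws_iam_role.workers.arn   # assumed IAM role
  subnet_ids      = var.subnet_ids

  instance_types = ["t3.medium"]
  disk_size      = 20

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 5
  }
}
```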

Is it true,

Erik, or might it be the case that all of this ultimately translates or boils down to some boto or boto3 primitives; boto is just building on the APIs that Amazon provides; boto's like Terraform in the end.

I mean.

Oh, I see.

I see.

I thought boto was essentially, OK.

Yeah, boto's just a library for automation.

And then, you know, HashiCorp uses the AWS SDK to achieve something similar.

So yeah, you can say related to that though.

So what's interesting is, like back in the day, when they first announced EKS there was discussion of EKS Fargate.

So Fargate-style Kubernetes; there's been no mention in this that there's any Fargate relationship to this.

I'm guessing it's all classic.

EC2 instances under the hood.

Erik, can you open and share this on your screen.

Yeah, well, first of all, share the links in office-hours on SweetOps.

That way everyone can see it.

Oh, sorry.

I'm sorry, wrong link.

All right.

So, Spotinst.

Can support something similar.

Probably it would be good.

Like for me to make a demo eventually.

Mm-hmm about it.

It's not the same as node group.

But probably covers a part of its functionality.

So first of all, the UI may have integration with Amazon EKS, but from the Terraform point of view it has some resources.

I'm not sure how it interacts with EKS from the UI point of view.

But if you look here —

So it creates a Spotinst "Ocean" on AWS, which means you can set a whitelist of instance types that can be started as part of the scale group, run the Ocean in a specific subnet, and so on — or you can specify... I think a little bit more context is due though.

So sorry guys.

We've been doing a lot with Spotinst in the last two weeks.

This all makes sense to us.

I just wanted to give a quick background on what Spotinst is. Many of you, I'm sure, are very familiar with the concept of spot instances on AWS: basically preemptible instances on a marketplace where you can bid on compute.

And so long as your bid is at the market rate you get those compute resources, but they can be terminated at any time, unless you reserve them for some window of time, from one hour up to six hours. Now, with spot fleets, Amazon has made it easier to manage pools of spot instances.

However.

When you look at the fully managed service from Spotinst, it makes spot fleets and all that stuff look like child's play. Spotinst does a very good job of making it clear how they're saving you money and how you can save more money, giving you visibility into your Kubernetes clusters — what's costing you — and helping you optimize that.

Now I'm not ready for a demo on spot today.

But we will have one in a future office hours session, probably led by Igor here, who's been doing a lot of our work on this.

So yeah, that that's just an intro into that.

Now, what Igor is talking about right here is the Terraform provider for Spotinst — the managed service — to provision all that stuff as code.

Yeah. So the general idea is that you integrate Spotinst with your AWS account and Kubernetes cluster, and then it begins to manage the creation of nodes and adding them into your Kubernetes cluster to get the lowest price.

Like, the best availability at the lowest price.

And it uses spot instances when the price is lower than on-demand; if the spot price goes up, then in the worst case you pay the on-demand price.

And so that can be very useful to a case that you describe.

Some of the other cool things Spotinst will do: it knows the probability that instances will be terminated, and can preemptively start cordoning nodes, draining them, and moving pods to other nodes, so you have better availability.

And also, it deploys a service inside Kubernetes.

So it knows how many cores are requested and how much memory is requested.

So it can actually right size your instances on the fly.

So that you're not over provisioning machine types.

Plus, using the concept of fleets, it can deploy lots of different kinds of EC2 instances with different bids.

So the chance that you lose your entire fleet at the same time is very low.

And then there's a lot more behind the scenes that I think this product can do.

Do you need to buy it — and do you need to buy that separately?

Is it like Wasabi, or can you actually just do it through your regular AWS?

It's a separate SaaS — you sign up for it at spotinst.io — and basically it's free for, what is it, up to 20 nodes.

I think.

And then after that you pay, and they charge you basically 20% of the savings.

So depending on how you look at it, it's basically free as a product.
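To make that pricing model concrete, here is a quick back-of-the-envelope sketch; the 20%-of-savings fee comes from the discussion above, while the hourly rates are made-up illustrative numbers.

```python
# Illustrative math for a "you pay 20% of the savings" pricing model.
# The on-demand and spot rates below are hypothetical, not real AWS prices.

def effective_hourly_cost(on_demand, spot, fee_rate=0.20):
    """Return (what you pay, what you keep) per instance-hour."""
    savings = on_demand - spot       # raw savings vs. on-demand
    fee = savings * fee_rate         # the platform's cut of those savings
    return spot + fee, savings - fee

pay, kept = effective_hourly_cost(on_demand=0.10, spot=0.03)
# You pay the spot price plus the fee; you keep 80% of the savings.
```

Either way you come out ahead of on-demand, which is why it reads as "basically free" as a product.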

My god that's brilliant.

Are they hiring pretty brilliant model.

I wish I came up with.

Yeah, I wish it was my product.

Right — let's see what else. We've got about another 10, 15 minutes here to answer any questions, if anybody has joined and has questions related to Terraform or Kubernetes.

Helm — Helm 3 is out. Anybody using Helm 3? By the way, I should have had that on here.

How is that going? You're muted, by the way. — I think it works.

It works great for me.

I was using Helm 2 tillerless before.

So local Tiller.

So it didn't change much.

Yeah. Did you migrate existing workloads to Helm 3?

Not really.

I just tested some stuff in it.

So I just tested it on a new test server. It just works.

OK this stuff works.

I read it.

Did you see the update?

The update for Helm Notifier?

Yeah OK.

Yeah OK.

Sighs I was just Pippi out here a little bit.

Hey, guys.

Peer here is working on an open source product that's really cool.

It's called Helm Notifier.

If you're using Kubernetes and Helm and haven't checked this out —

It's a great way to know what is changing.

So if you look at any chart, you can compare multiple versions of that chart.

Yeah. Now, you can compare.

OK — so I see the UI has changed a little bit since I looked at it last, but here's what's pretty cool.

So if you're currently running, say, Grafana version 3, on the left —

and you want to upgrade to Grafana version 4.0 — what's going to change?

Like, today that's totally opaque.

Well, not with this — with it you can see all the changes. In this case quite a lot, because I jumped major versions here to exaggerate the changes.

But here's everything that would happen if you went through that change.

So let's think about the ecosystem.

I love it.

I think it's amazing.

Yeah, it works.

It got a search function back.

The search front end is Vue.js.

So it's no.

If you add somehow as a compare function was skipping you to find out, which is kind of weird.

So I will just find out my go to test tool kit test case.

So we'll have to check where that is.

I just want it.

I have a question.

I guess.

So let's see how it was.

A diplomatic way of saying this.

Let's just say that there are a lot of things that our product team,

I think, overlooks and neglects.

And so I'm kind of running a little bit of a shadow development organization within the company.

One of the things is that we have a Helm chart to install our collector, and I wanted to basically clean it up a little bit and make it kind of an exemplary Helm chart — you know, kind of like how Anton Babenko's Terraform stuff is always pristine and fabulous.

I want to do.

I want to be like him. Kind of like — yeah, there's room for more excellence always.

I see what you guys doing.

Where does one go to learn how to like build beautiful home charts.

I think it's hard to say anything in the chart realm of Helm is beautiful.

No — yeah, I would do a shameless plug for our monochart, but I would say that the only thing

beautiful about monochart is that it has proven wonderful for us in our consulting to use.

So we don't write as many charts. But if you look behind the scenes of monochart, monochart is a master class in how to use Helm.

But it's ugly as sin, because it's Helm.

So let's look.

Yeah, I can just show you guys quickly what I'm talking about there.

Yeah Yeah.

If you go to Cloud Posse's charts repo, under incubator we have this thing called monochart, and the reason monochart exists is that we discovered there's very seldom justification for developing a new chart for most apps, so long as they follow a pretty standard interface, right?

So instead, we think about charts as interfaces, and monochart is like one interface to rule them all.

So here's an example of how it can be used in our values file here.

So for example define a Docker config with these settings.

This is going to be the image for our chart.

Here's a config map — it supports all the best practices, like supporting annotations.

It also supports mounting your config maps to the file system.

It also supports defining environments and files.

So it supports the three most common ways that config maps are used in Helm.

And it repeats that for secrets, and adds support for inline environment variables.

And then we support the common primitives of deployments and daemon sets, et cetera.

So it's kind of like a dumbed-down description of a Kubernetes resource, the way they're most frequently used within Helm.
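As a rough illustration of the interface being described, a monochart-style values file might look something like the following — the exact keys here are assumptions for illustration, so check the chart's own values.yaml for the real schema.

```yaml
# Hypothetical values for a single generic "interface" chart.
image:
  repository: example.org/acme/api   # placeholder image
  tag: "1.2.3"

deployment:
  enabled: true
  replicaCount: 2

configMaps:
  default:
    enabled: true
    env:                  # rendered as environment variables
      LOG_LEVEL: info
    files:                # mounted into the filesystem
      settings.yml: |
        greeting: hello
```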

But now if you dig down into the templates and look at how this works behind the scenes — the templatization of this stuff —

yeah, it gets pretty gnarly, and you know, this is ugly.

Like I said, ugly as sin, to templatize all of that stuff.

But it is wonderful because I think in a way.

Part of the problem is Helm itself.

There are little decisions that humans need to make that they're not generally well prepared to make.

And so if you could automate those decisions.

Exactly right.

You know.

So this is what I I totally advocate that companies do.

So if your company is developing microservices and you have, you know, 15 different microservices, and you're advocating developing 15 different charts for those services —

it really begs the question: is that necessary?

Because detecting the differences between all those charts is very difficult.

So we use monochart all over the place to deploy services in our own stuff.

So Cloud bossy.

Files I'm sorry.

Go on.

If I send you a link.

Well, it was an example.

Oh cool cool.

You sent a letter home.

Ice so Yeah.

Igor, in the office hours channel, posted a link on how you can use monochart to deploy services on Kubernetes.

But here, just a quick Google — well, Google's now synonymous with searching; I'm not even on Google here, I'm on GitHub.

Just a quick search on GitHub shows you all the different places that we've used monochart in place of developing an original chart.

Basically just passing values down — and the monochart has all the configuration.

OK, that's cool.

Yeah, exactly.

And in the extreme case if you guys haven't heard about it.

There's this thing called the raw chart — the raw chart from the incubator repo for Helm — and it's the ultimate escape hatch.

It just lets you embed full-on Kubernetes resources as values.

So now you can use the helm as a package manager.

But all these other raw resources that you want to deploy.

This is like when maybe a vendor gives you some raw resources.

But you use Helm, and you want to manage them that way.

So the raw chart is for that.
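To illustrate the escape-hatch idea, a values file for a chart like this might look roughly as follows — treat the structure as an assumption and check the chart's documentation for the real schema.

```yaml
# Hypothetical values: arbitrary Kubernetes manifests passed in as values,
# so Helm still tracks them as part of a release.
resources:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: vendor-settings   # e.g. raw manifests handed to you by a vendor
    data:
      feature: "on"
```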

So monochart is somewhere like the raw chart.

It's just that it's a whole lot more opinionated.

What I can say about how to do charts in general — when I teach charts, I talk about this all the time — is: use as little templating as possible.

It gets super confusing if you have to maintain it.

I mean, monocharts are a nice idea.

But in general.

I mean, we rarely use a lot of templating, so we just do really, really basic stuff.

If you're doing something little drips.

You're doing something else.

This module.

Well, yeah.

Maybe along the lines of raw, almost — or you could use raw.

You could use raw as your — what do you call it — your chart dependency.

And then pass your values to it, right.

Yeah Yeah, maybe.

I mean, one of mine has eight lines.

So it's super, super basic.

What we try to achieve have you tried just that based on that, I have a case on it.

OK So today sort walks her legs and you take her bar.

And one of the things that they've come out and suggested is to stop templating YAML — they say templating YAML the way Helm does it is not really cool — and they suggest using Jsonnet as a special language for templating.

JSON, I guess.

That's not a surprise — YAML is a superset of JSON.

So serialized, it's the same.

And I guess Helm 3 discussed supporting different template languages — probably initially supporting Lua — and there was discussion back and forth

about whether they should support Jsonnet. I'm on the fence: Jsonnet is, like, leaps and bounds nicer to look at than, say, CloudFormation, but I still don't like it as a developer.

Maybe it's going to take me a while to warm up to it.

So I do kind of like the resolution to use Lua over Jsonnet in the end.

So I'm not sure that monochart written in Jsonnet would look better than in YAML.

Yeah or.

Yeah Yeah.

I don't know.

I don't know.

I think where the power might come in is the ability to create, basically, libraries to generate common kinds of key functionality behind the scenes.

OK — has anybody tried the Lua stuff with Helm 3?

I mean, is it looking at here now here.

So I say to yourself say that one more time.

I mean dirty is tough sell off on Wall st.

It's all said I couldn't stand it.

That's fine.

No All right.

So yeah, this is old.

I'm not going to bother sharing.

Now All right.

Well, then that pretty much brings us to the end of today.

Thanks, everyone, for coming out and sharing your experiences.

Mike Rowe — thanks for sharing your experiences with Control Tower.

We will have for next week.

I think Igor is going to join us — or were you saying something?

No, no, no, no.

So, next week:

I think we're going to be talking about the ThoughtWorks technology radar.

I actually think we're going to be picking off a few more things off that ThoughtWorks technology radar.

It's really cool.

They do a pretty good summary of the technologies out there and what you should keep an eye on.

So anyways thanks everyone.

Talk to you next week. same place same time.

Bye Bye bye.

Public “Office Hours” (2019-11-13)

Erik OstermanOffice Hours

Here's the recording from our “Office Hours” session on 2019-11-13.


Machine Generated Transcript

Then with that, we will kick this off. Today is the 13th of November, and we're doing office hours here for SweetOps.

Unlike other times we actually have an agenda some talking points.

We don't have to cover all of these things today.

But these are just to keep the conversation flowing, if things stall. Obviously, the number one priority here is for us to answer your questions, whatever those may be.

But here are some things that I just wanted to talk about.

So the first thing I'll just cover is some SweetOps stuff.

If you're in our Slack team, you'll have seen some activity there.

Basically, what I did was rename the general channel to announcements, because on a free team that is the only channel you can restrict posting on.

And it's the only channel you cannot leave.

And then I created a new general channel invited everyone there.

So that's where we can now have conversations related to everything else about DevOps.

It doesn't have to be topical like all of our other channels are.

It's also a good place to ask questions like, if you don't know where to ask a question.

All right.

So with that, I'm going to turn the mic over.

Open the floor up.

Anybody have questions or problems that they're dealing with — Terraform, Kubernetes, Helm?

Interesting news.

You've seen that you want to share.

Do go.

Go ahead — unmute yourself and talk.

Did you also change Geekbot?

Because I noticed it changed the logo on it.

At least — are you paying for it?

You know.

So I started using Freeport before I became like a master of Zapier.

And now it's just easier for me to use Zapier than it is to use Geekbot.

But look, they have a good product.

And you know I want to support them.

But to do what I want to do — because we're actually copying that, and we started using Geekbot in the company.

Oh — and when I saw that you actually started using your own logo,

I said to myself, OK, we might need to pay for it, because this kind of looks cheap on a company.

I hate the Geekbot logo, period, to be honest. I really do.

I don't mind.

I don't mind it.

We're actually thinking of naming our next bot

after one of the guys that's leaving the company now.

Yeah — as an homage to him, because we want to start using Fogel as well.

But we were just seeing whether we develop our own bot or start using theirs, because it's about $5 a month.

So it's not that much.

I think it's Vlad — have you met him yet?

Not yet.

So it's Vlad Shoalsberg — I think he's one of the founders of Fogel, and he's in SweetOps.

You can reach out to him if you have any questions.

Awesome Thanks.

Yeah well.

Any other questions.

Oh great shot.

I'm sad that the logo doesn't show up that well in dark mode.

Oh, that was a consideration.

I just want to say.

And I was running dark mode for a little while.

And I still do it on my Mac, to be cool.

But to be honest, with a lot of things I just find myself constantly squinting.

So I've gone light mode for Slack.

So I've gone light mode for Slack.

We also implemented today are ss within Active Directory connector in our billing account.

So we're starting to actually having to redesign all of our policies and group.

I am groups and roles.

Fred yes.

So here's the guy that actually did it.

Brave — dealing with all that IAM.

Yeah, yeah.

Very, very lonesome work, because he did it all by himself.

So the process of that, which is not bad.

But you know, the more I talk about this stuff — and I realize this is a little bit extreme for some —

IAM is great for managing your services, but you should almost never need it for humans, because everything you do is going through your GitOps workflow, and you're actually using the governance model of your source control, and how changes are applied, to regulate what actually happens.

And that eliminates a whole class of problems: managing complex IAM rules, coming up with a matrix of groups

and IAM permissions and policies that you need.

Right. The thing is that even though we want to eventually migrate the whole company to that structure — based on using Terraform and either Atlantis or Geodesic, as we use it —

right now we have a huge chunk of the company in a single monolithic AWS account.

So until we migrate that monolithic account and slice it up,

we will have to do this.

I we will have to do this.

And besides, some of the developers still find it useful

to be able to access the AWS console and look at logs and things like that.

So speaking of which — did you see the post, I forget who shared it, in the AWS channel today?

It's really cool.

It's how to detect manual console changes and alert on those, which is kind of cool.

I think Lauren Lauren like money many elections.

Yeah, I had a meeting with some of the guys from AWS, and there's not much I can divulge.

But keep an eye on re:Invent, which is coming up, because there are a lot of things coming related to import that we will all be very happy to use.

Basically, if you interact with a lot of VPNs, with providers and other people.

Yeah. Are you going to re:Invent? Not myself.

OK. One of the other managers is — if he drops out, I will be going.

And I might also be absent from work and appearing at the event anyway.

I don't know really.

I don't know yet.

But I am not yet booked. I do want to go,

if I can.

Our son was born a couple of months ago.

And my wife will kill me if I am gone on my own.

How old — two months?

Oh no.

The other cool thing I got.

And the other cool thing I've seen is: you somehow register all your resources that have been provisioned using your blessed method — whatever that is, whether it's tags or some other method, or Terraform state.

And you have a system checking every five seconds or whatever,

and just automatically deleting, right,

anything that's not.

I don't think.

I don't know anything about that.

You know — AWS Config. And then you can have policies that enforce that, which is really neat.

Especially when you put it together with the AWS Labs CIS benchmark.

So somebody brought up Prowler, which does something similar to this.

This is similar —

the AWS CIS security benchmarks.

It's implemented with CloudFormation, and you can just deploy it to your account and it sets up all the continuous monitoring of your rules and stuff.

That's why we came up with a Terraform module for CloudFormation — so we could deploy that.

Yeah, I'll share these. Is that kind of what you were thinking about, Andrew? I don't know the underlying technology.

He was just telling me about how anything that wasn't provisioned using the blessed method would just automatically get deleted immediately.

Oh — actual enforcement of it. Yeah.

You can't —

I don't think you can do enforcement with it.

I think

Config will only show you how compliant you are against your policies.

Not sure if it prevents anything.

Oh no, his was —

his was active.

It was: if you create something in the console — you know, like this thing that somebody posted earlier, where it shot out a Slack message or whatever —

instead of shooting out a Slack message, it would actually just delete the resource.

OK — is this third-party software or open source that he's deployed for that?

I don't know.

I doubt it.

It was for a large financial institution.

So with the AWS Service Catalog, you can define the services that are permitted for your AWS accounts, and anything that goes outside of that is not possible.

But it was to —

it was to enforce, you know: "OK, we have a policy that everything must be created in Terraform," or whatever —

yeah, you know, using our corporate workflow pipeline or whatever.

Yeah. And so you had to have — they called it the

"break glass" credentials — where, you know, you could get in there and fix shit if things went wrong.

But it had this thing running all the time going: was this created using the blessed process?

No? Kill it.

That's really.

Yeah, that's really cool. Can you ask your friend

what they're doing for that?

Yeah, yeah. Nothing jumps out at me —

I don't have anything for that.

Anybody else seen anything cool related to policy enforcement?

Well, let's see here.

So going looking through the agenda here.

Big guy.

Yeah quick, quick thing.

Those Kubernetes users that have been cursing at Helm for the last few years about Tiller and the security implications of it can finally put that behind them.

Helm 3 has been officially released.

So that's cool.

I believe helmfile

now does support it.

But we haven't yet

kicked the tires on it.

So I will confess to that. Anybody using Helm 3 yet? Because the beta has been out for a while.

No, I'm still on, like, Helm 2-dot-something from forever ago, because my company is slow.

I was looking at it — I was looking for something to give a talk on.

And I think you just gave me one.

And it's a good excuse to get me exploring Helm 3.

Yeah, I believe that's it.

I just posted it.

Oh — Cloud Custodian. That does ring a bell.

Let's see here.

No, I haven't started.

OK, cool.

OK — so since they open sourced it and everything, I can tell you:

he worked at Capital One, and they were doing it at Capital One.

Capital One was using Cloud Custodian.

So yeah — they open sourced it.

There's a lot of talk about it.

That's pretty legit.

Very cool.

Thanks for sharing.

I'm going to post them to the SweetOps office hours channel — by the way, if you guys haven't joined the office hours Slack channel in SweetOps yet,

that's where we're sharing all these links.

I have a question.

Do any of you use a self-service password application that you might want to recommend, related to this Active Directory single sign-on that we're trying to implement right now?

I currently have a PHP application that allows me to reset passwords on Active Directory, but I'd rather not use anything like PHP if I can avoid it.

Yeah, so we haven't used it for this use case yet.

Keep look, it's pretty awesome.

I don't know if I don't know if Qi cloak supports password resets on Active Directory.

It's closed.

Let's see here.

Yeah — looks like there's a solution for this.

So I mean, it seems like using Keycloak would be a pretty legit option.

So you know, we use Keycloak for our IdP gateway.

So Keycloak is by Red Hat.

It's in Java.

It supports every IdP imaginable.

So if, like, I go to the portal here and log in —

You see I'm presented with the login screen.

This here is for Active Directory or whatever custom back end, you have.

And this here is for single sign-on, and you can integrate it with any number of single sign-on providers.

So it's cool — here is our customer that manages their end, and we manage ours. OK.

And do we have to have our own server for it?

Or is it —

So yes, Keycloak, as I said, is open source, and we run it ourselves.

If you want a managed service — yeah.

What have you thought about Okta or Auth0?

The only reason for using Auth0 was that it's made by a guy from Argentina.

That's where we're from.

Oh cool.

Yeah — Auth0 comes highly recommended.

Okta, I think, is the 800-pound gorilla in the space, you know.

So Keycloak comes into play also from an economics perspective.

You know.

Especially since those get very pricey — Keycloak you self-host and self-manage.

You're increasing the attack surface, obviously, by using open source,

off-the-shelf stuff.

But it hasn't had a critical CVE for some time.

And it is supported by Red Hat.

Well maintained.

OK Thanks I have to get you released to have regular people looking for it.

Well, for what it's worth — like everything else, under our Cloud Posse helmfiles we also have our distribution of how we install Keycloak.

So this is a way to get started faster.

What's also cool about using Keycloak is you can use it together with Gatekeeper — Gatekeeper becomes your identity-aware proxy, and then you can secure all your apps behind that. How have you felt about Gatekeeper?

Last time I looked at it, it felt like a dumpster fire.

Painless?

I don't know — I mean, we haven't had any problems with it.

It was, as with everything.

Open source.

The challenge is initially getting everything working because documentation is out of date or nonexistent or conflicting.

So once you find a working recipe for it,

it works really well.

And we can just.

So this is all behind Gatekeeper right now — it's all protected by my single sign-on.

And that includes the Kubernetes dashboard.

And this is using my role with Gatekeeper to authenticate through Keycloak to the Kubernetes dashboard, and the RoleBindings there map my AD role to Kubernetes.

Yeah, we're seeing one of those fun dropped.

Really good.

What was that one of those fun drop Downs that I don't have.

Oh, this is the latest version.

So if we just updated key cloak sorry, folks or rather forecastle to the latest release and then you can associate metadata with the services in the portal.

What's also cool about the latest release of forecastle written for castle but pronounced forecastle is you can now have a C or D So you can this whole menu is managed using C or D for forecastle and that's cool.

If you're using things like Istio and virtual services or virtual gateways and you can't use the same ingress annotations that forecastle is designed to use.

So now you just deploy the 4 castle app CRT and you can add anything you want to the menu here.

Which is cool.

Also for like external services and things like that.

So clicking on this takes me out to PagerDuty.

Any other questions related to that.

I was a rabbit hole there.

I have a question about this and that guys.

My name is Rocco.

I'm from Romania.

Thanks — it's my first time on this.

Yeah, good to have you.

Thanks something now implementing what I was doing about a man in the Navy.

Yes And we'll go for that.

We go to production later next year, and we're using the new north region in Stockholm,

now that it is available.

Yeah — I'm not able to have multi-AZ on the cluster.

And the reason I need multi-AZ, at least for now, is only to get the logs to staging as quickly as possible, because we have logs from nginx

and application logs, and we are routing them depending on which application the logs come from,

and assembling them in the Elasticsearch index.

Gotcha OK.

So it sounds like you're not using Kubernetes,

first of all.

Yeah — not yet. OK.

That's fine — because if you were using Kubernetes, then what I would say is: use one of the fluentd exporters.

We have exporters to Elasticsearch,

to Datadog, to Splunk — sorry, Blaise, we don't have one for Sumo Logic.

But yeah.

So fluentd is basically what you're going to want for your logs, right — to stream those to one of those back ends.

You want to get those off the servers.

You shouldn't care about the servers.

So my approach.

For a quick solution, it was: OK,

just go with the worker nodes in a single AZ, so I can create a persistent volume, mount it in every pod, and have a sidecar container with the logs.

I already have that set up with log rotation.

So just read the logs and send them to Elasticsearch on a regular schedule.

Yeah That works as well.

I mean so.

So when you say nginx — are you talking about the nginx ingress, or are you deploying nginx separately inside Kubernetes?

No, it's simply the ingress.

Well, OK.

So you do have another option, though: you can configure nginx to log to stdout.

And then you just use the whole backplane of Kubernetes logging, and then you can use the fluentd stuff, et cetera.

Yeah, but I had a problem with that: entries were being written out as multiple lines, not one entry per line.

So that was my biggest problem when I did the test sending everything to stdout.

Yeah, I hear you.

I know what you mean and I've seen that sometimes happen.

So if that's a deal breaker maybe it gets complicated.

The other thing, though, is fluentd — Logstash, you know, used to be the leader.

It seems like everything's moved over to fluentd, so it's now the EFK stack instead of the ELK stack.

Yeah, and I'm not sure if fluentd does a better job of handling multi-line joining of those entries.

I know you can write custom filters and we've had to do that for customer applications that will join those lines.
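The multi-line joining being discussed can be sketched in a few lines of Python; the leading-whitespace heuristic below is an assumption standing in for whatever pattern a real fluentd multiline filter would match on.

```python
# Merge continuation lines (e.g. stack trace frames) back into one log event.
# Heuristic: a line starting with whitespace continues the previous event.

def join_multiline(lines):
    events, current = [], None
    for line in lines:
        if current is not None and line.startswith((" ", "\t")):
            current += "\n" + line      # continuation of the previous event
        else:
            if current is not None:
                events.append(current)
            current = line              # start of a new event
    if current is not None:
        events.append(current)
    return events

raw = [
    "ERROR Something failed",
    "  at handler (app.rb:10)",
    "  at main (app.rb:2)",
    "INFO Request served",
]
events = join_multiline(raw)   # two events: one error with its trace, one info
```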

But OK.

But what you described.

It just sounds like more work.

But if you have a working.

Hey, you know, that's the hardest part right there.

So have you fixed it yet?

Not for now.

I will go with this solution and see in the future how to change it.

The biggest problem is it also needs to do a lot of standardization on the logs before sending them to Elastic.

Yeah Yeah.

So we did that as well.

I don't know if I have an easy example to pull up here that's open source but Yeah, basically what we wanted to do is take a structured log data generated by custom applications, specifically written in Rails using the standard logger there.

So it's key-value pairs.

And we wanted to cast that into JSON and structured data for Elasticsearch so it's easily filtered.

And that was pretty straightforward to achieve with Fluentd.
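The key-value to JSON transformation itself is simple. Here's a toy sketch in Python (the field names are made up, and in practice a Fluentd filter plugin would do this inline rather than a standalone script):

```python
import json
import re

def kv_to_json(line: str) -> str:
    """Parse a key=value style log line (hypothetical format) into JSON."""
    # Capture key=value pairs; values may be bare tokens or quoted strings.
    pairs = re.findall(r'(\w+)=("[^"]*"|\S+)', line)
    record = {key: value.strip('"') for key, value in pairs}
    return json.dumps(record, sort_keys=True)

# Example: a Rails-logger-style line becomes structured data
# that Elasticsearch can filter on.
line = 'method=GET path=/health status=200 duration=0.003'
print(kv_to_json(line))
```

Once each line is a JSON document, Elasticsearch indexes the fields individually instead of one opaque message string.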

OK looking great.

Yeah Oh, yeah.

What would you recommend for monitoring a Kubernetes cluster?

And storing the data for a long time?

Like the data from the last month?

Yeah Yeah.

So definitely a few options there.

If we're talking open source, then the de facto answer is pretty much Prometheus and Alertmanager, and using all the exporters that go with that.

And if you are using that then how you manage Prometheus is a bigger question right.

So Prometheus itself is a memory hog, and when you query it, that could take down your cluster.

So basically, the reference architecture for Prometheus is to have a hierarchy: each cluster runs its own Prometheus but keeps a very small retention period.

And then you have a centralized Prometheus, which federates the Prometheuses of all the other clusters.

If you require, like, extreme precision: you're going to lose some precision with this.

But for most people and most monitoring, that doesn't matter.

And then this is going to get you what you need for long term retention.
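That hierarchy is what Prometheus calls federation. A hedged sketch of the central server's scrape config; hostnames and the match expression are placeholders, and in practice you'd federate only pre-aggregated series:

```yaml
# Central Prometheus: pull selected series from each per-cluster
# Prometheus (which keeps only a short local retention period).
scrape_configs:
  - job_name: federate
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{__name__=~"job:.*"}'   # typically recording-rule output only
    static_configs:
      - targets:
          - prometheus.cluster-a.example.com:9090   # hypothetical host
          - prometheus.cluster-b.example.com:9090   # hypothetical host
```

Federating everything instead of aggregates is where the precision (and performance) loss mentioned above comes from.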

Now, there are other considerations.

There's Thanos, which is an open source long-term data store for Prometheus.

We don't have firsthand experience deploying it yet.

So I can't tell you how painful or non painful that is.

But that seems like the way to go.

If you need to have tremendous scale for your metrics.

But can I see, let's say, some data from September?

I'm not quite sure what you're referring to.

So basically, you have the Kubernetes metrics APIs, you've got Heapster.

You have all these services that export data in a Prometheus format, and then what Prometheus does is it scrapes that.

So Prometheus is running on just a certain number of nodes in the cluster.

And then you have Heapster. Heapster is one of the products that gets metrics out of the cluster, out of Kubernetes, like how the scheduling is doing.

And then you send that to Prometheus and the Prometheus sends that somewhere.

So where does Prometheus store its data.

So Prometheus supports a whole bunch of pluggable back ends for this.

OK Yeah.

And the default, or the fastest one to get up and running, is just the local file system.

But then that causes problems, right, if you want HA and failover.

So to be honest, what we do for smaller installations with Prometheus is we're using EFS, and that seems to be working pretty well right now.

We don't know yet when that's going to fall over.

But we have a few things in our toolbox to deal with that.

One is, you know, provisioned throughput.

So the most likely reason for having problems with running Prometheus on EFS is going to be problems with throughput.

And we haven't seen it yet.

So we can dial that throughput way up before we invest in coming up with a more sophisticated back end.

I'm glad to hear you say that, because we're doing the same thing.

Nice.

Everybody goes:

don't use, you know, NFS, which is what EFS is, right.

Don't use EFS for your Postgres databases, don't use EFS for your GitLab, you know, persistent storage.

Don't use EFS for this and that and every other thing.

And we're going well, we're small.

And we had the same thought.

It's like, well, when it falls over we'll figure something else out you know because we're still in like beta.

It's EFS, and it hasn't fallen over.

And we're not small anymore.

It's a good point.

And it's a testament to just how performant and reliable EFS is. Related to this:

like, we were running Airflow, and Airflow expects to have a shared file system. Well, Airflow started falling over, and guess whose fault it was.

It was EFS. Our EFS file system just wasn't big enough.

But here's the hero story of this all.

Like, we could have spent hundreds of hours on our customer's tab to fix this with some more elegant, engineered solution. Or, for $300 extra a month,

we could bump up the provisioned throughput on that file system, and the problems just disappeared.

And that was like a big deal.

That's what we did.

Another thing you can do is you can dd a bunch of garbage into the file system, just make it bigger, which gives you more credits.

So it keeps your burst credits stable.

Yeah but you get more credits because your system's bigger.

Yeah, you pay for a little bit more storage, but you're paying less for that extra storage than you're paying for the provisioned throughput.

Oh, really.

So that's Yeah.

So you just dd it.

Yeah, so you just dd a terabyte of garbage into your EFS and you get a bunch of credits.
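To put numbers on the dd trick: under the EFS bursting model, baseline throughput (and burst-credit accrual) scales with the amount of data stored, at 50 KiB/s per GiB at the time of this discussion. A quick sketch of the arithmetic:

```python
# EFS bursting mode: baseline throughput scales with data stored
# (50 KiB/s per GiB), so padding the file system with garbage
# raises both the baseline rate and burst-credit accrual.
def efs_baseline_mibps(stored_gib: float) -> float:
    """Baseline throughput in MiB/s for a given stored size in GiB."""
    return stored_gib * 50 / 1024

small = efs_baseline_mibps(10)      # a small file system: well under 1 MiB/s
padded = efs_baseline_mibps(1024)   # after dd'ing ~1 TiB of garbage: 50 MiB/s
print(small, padded)
```

Whether padding beats provisioned throughput on cost depends on current pricing, so run your own numbers before committing either way.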

All right, everyone you heard it here first.

The same is true for RDS.

By the way.

Yeah, it's cheaper to just make a bigger regular gp2 volume for your RDS instances and get the three IOPS per gig than it is to max out the 50 provisioned IOPS per gig.
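The gp2 math mentioned here works out like this: baseline IOPS scale at 3 per GiB, with a 100 IOPS floor and a 16,000 IOPS cap (per AWS's documented gp2 behavior), so a bigger volume buys IOPS with storage:

```python
# gp2 baseline IOPS: 3 IOPS per GiB, floored at 100, capped at 16,000.
def gp2_baseline_iops(size_gib: int) -> int:
    return min(16_000, max(100, 3 * size_gib))

print(gp2_baseline_iops(20))     # small volume rides the 100 IOPS floor
print(gp2_baseline_iops(1000))   # 3,000 IOPS just from size
print(gp2_baseline_iops(10000))  # hits the 16,000 IOPS cap
```

Past the cap, provisioned IOPS (io1/io2) is the only way up, which is where the cost comparison flips.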

Yeah, that's the whole deal.

First of all, it's simpler.

And yes, it's cheaper.

Yeah, that's all I'm saying.

The same goes for EBS warming, too.

Food for thought.

Yeah Yeah.

That is true.

Yeah, I remember EBS warming. I remember having to have a script that would run to just cat the device, you know, /dev/xvda or whatever, to warm that up.

Ah, the good old days.

Cool. OK, so.

Interesting thing: it was just announced today that Docker sold off the enterprise portion of their business to Mirantis. Mirantis, if you don't know, is like one of the preeminent professional services companies for cloud technologies.

They got their start on, like, managed installations of OpenStack and have branched out to Kubernetes and everything else.

Somebody said, kind of trolling, you know: who even uses Docker Enterprise?

But I think there's a point there.

Yeah but the more interesting is like, wait.

This is kind of scary to me because, like, Docker doesn't really even have a business model right now.

And you know they're running out of runway with their venture capital.

And then they've sold the enterprise business.

Granted that wasn't generating a lot of money.

But like what's the plan.

Like, the one revenue generator they had is now gone.

So does this mean, I don't need to log in to download Docker anymore.

Are they going to get rid of that.

Like, what happens when Docker Hub goes away?

I mean, if they go away, you know, what will happen?

I thought.

Honestly, I think Docker Hub is, to us in our industry, you know, DevOps, the equivalent of what YouTube is for social media, right?

It's so critical to the entire industry.

But too expensive to run for a business that depends on it to exist and survive, and too big to fail.

So I think Docker Hub has to be acquired by a company like Microsoft that can afford to run a loss leader to keep it up.

It's like the Hunter S. Thompson phrase: too weird to live, too...

I can't remember exactly the phrase from Fear and Loathing in Las Vegas.

Well, the "too big to fail" quote is from 2008, all the banks, you know, all the banks getting bailed out.

Yeah Oh, yeah.

But yeah.

Yeah, there was a movie called Fear and Loathing in Las Vegas about Hunter S. Thompson, and there's a line from it where he's talking about his lawyer, and it goes:

"one of God's own creations, too weird to live," like, "too rare to die," or something.

So, Docker, Docker Hub.

I don't have a huge problem with it, because you can put your own stuff up on Docker Hub.

And whatever.

What I have a huge problem with, and I'm really glad that they're fixing it with Helm 3, is the stable repository of Helm charts.

Oh, wait, no.

So what are they fixing?

In Helm 3 they're deprecating that repo and moving everybody over to Helm Hub.

Yeah, because trying to get a pull request or something into that repo is like pulling teeth.

Oh, it's insane.

We gave up.

Long ago, we gave up.

Yeah So it's like that.

I don't mind Docker Hub, 'cause I can push my own thing to Docker Hub whenever I want and people can use it.

But the Helm chart stable repository is what infuriated me.

But the Helm Hub...

And I plead ignorance here.

I just used it for discovery.

But it's not like a chart aggregator, is it? Like, it doesn't cache the charts.

You can't use it as an upstream chart repo?

No, no, it's just a place to find where people are publishing charts.

Which is cool.

And all. But one of the problems, right, is when you start depending on all these third-party charts, they can go away or get deprecated, and you depend on them and you haven't localized them.

So I would like a hybrid: like, I can self-manage it,

but it proxies the upstream somehow, kind of.

Well, I was going to say the Terraform registry.

But they don't proxy that; they don't host it.

They don't cache.

Where's Pier?

I'll file the feature request.

Yeah, a proxy that caches the charts.

That'd be cool, actually.

Any updates to helm-notifier?

I added a readme.

Oh, very nice.

Thank you.

To give context, if you weren't on the call last week:

this is really neat.

helm-notifier.

It's an open source project.

One of the SweetOps members built it.

What's cool about it is you can look at any one of these repos any one of these helm charts.

And you can compare chart versions between each other.

And so, see here: the difference between any two versions of this chart.

Now the UI is kind of limited in what you can compare.

But the URL is totally unrestricted.

So you can just hack the URL and compare any two versions of any Helm chart to see what the differences are, and that should de-risk your upgrades in the future.

So he's working on a feature that I requested, where you can inspect any individual version, like the whole chart, as well.

I just posted a screenshot you sent me earlier.

I don't think it's live yet, but it's nice.

He also calls out that this started out as just some tool for him to use personally, because he was tired of having to figure things out.

And so like, yeah.

The UI being really rough was just it worked for him.

And so he didn't spend any more time on it.

Right but now it's an open source project.

You know we can help him with that.

Yeah, helm-notifier, whatever it's called.

Yeah, you're welcome for the readme. Yes.

Here it is.

I just.

I had one comment about the Mirantis acquisition.

Yeah. Mark, what's up?

I was just reading the tech crunch article because this was brand new right.

Yeah, it just came out today.

So the list in the TechCrunch article says: with this deal, Mirantis is acquiring the Docker Enterprise technology platform and all associated IP.

And then they list each one: Enterprise Engine, Trusted Registry, Unified Control Plane, and the CLI. But I think it's the enterprise

CLI, for managing those components.

OK, I'm going to hope I'm going to hope that's right.

Yeah, I think so.

The CLI itself, it just says "CLI."

Everything else is, like, "Enterprise" or "Trusted" or, you know... this one just says "CLI."

Is that the Docker CLI? Isn't the CLI open source? It was, right?

No, but now you have a new company to go talk to about your problem.

Yeah, right.

In related news, also announced today: Quay is open source.

The so-called premium Docker registry was acquired by CoreOS, and then CoreOS was gobbled up by Red Hat.

And now they're open sourcing it.

True to their ethos. I think Quay was also a chart registry, right?

The crazy thing that I always loved was it came built in with static container analysis, with Clair.

Yeah, you just got it for nothing. Like, that was great.

Yeah So that's pretty cool.

Which is interesting, because, like, we were just gearing up to deploy Harbor. Not a good story for Harbor, by the way.

But Harbor is kind of like the open source alternative to Quay, before Quay was open source. Harbor uses Clair under the hood as well for container scanning.

What's cool about Harbor is you can use it as a pull-through cache for Docker, which... I wonder if Quay supports pull-through caching as well.

Artifactory does.

But only the commercial one; the OSS Artifactory doesn't do it.

I don't believe it. My company is currently looking at the commercial Artifactory.

It's not that expensive.

It's like, what, $3,000 a year for unlimited users?

I mean, it's not that much money.

Yeah, for the Pro version.

I thought it was something like $3,000 per user or something.

No, but JFrog's pricing,

when I reached out in the past, seemed prohibitively expensive.

But if they're saying $3,000 a month... sorry, $3,000 a year for Artifactory with unlimited users, well, that seems totally reasonable.

That's what I heard.

I mean, hold on.

I'm doing a little research.

But yeah.

There may be usage limits there.

I think I just remember hearing in standup today, about when we were using it, that our usage of it was high.

Like, the number of pulled images was too high.

The size of all of them.

Yeah Yeah, I got it.

I got you, bro.

Just linked it right there.

I love him.

Yeah, it's just the first link there, and I wonder...

What was that.

So, Artifactory on-prem.

Artifactory Pro.

So I'm looking at on-prem for them, because that's what we do.

Like, we do all on-prem because we're a government system integrator. Like, you know, people run away from cloud unless it's, like, AWS.

We've got lots of AWS stuff going on.

And Azure and whatever.

But all the different SaaS services, not so much.

But Yeah.

$2,950 a year, unlimited number of users.

I mean, that's what we're looking at, dude.

But no freaking S3 storage.

S3?

Yeah, S3.

No, no, it says no.

It says that's not included.

Well, but that's strange.

Why no S3, right?

I mean, if you already have S3, what do you need that for? No, no, no.

Persistence of your artifacts.

I don't want to manage that on EBS volumes.

Oh, I gotcha. I got it.

So I guess that means using EFS. But still, S3 seems like a better option than EFS.

I gotcha gotcha gotcha.

So what we cared about was the universal support for all major package types, because we want private npm, private Helm, private Docker, private Maven, and all that stuff.

Well, we'll figure out GlusterFS or whatever, you know, because it's $2,950, people. Like, you know.

Gluster? I've never met any company, ever,

that's been successful with that.

Yeah Every time I've used it.

It's been like this false promise.

And I regretted it.

OK, rebalancing... well, yes, yes, yes.

I'd be curious about this Artifactory, because we used it at previous companies,

but I wasn't involved with it.

The basic thing is,

you know, you have to get the Enterprise version if you want high availability built in. For a service like this,

it needs to be up. Like, you can't deploy without it.

You can't build without it.

Yeah So I'm wondering like does that just mean, you have to know how to make a replica set of this Docker container.

And that's their high availability built in that they're charging $26,000 a year for it.

What's that mean? I'm not optimistic. If we take a look at Jenkins, the open source one, making that HA without enterprise is, like, impossible, right?

It's by design.

They cripple it to not support running concurrent copies on, like, an elastic file system.

So OK, this is good.

So, like, without Artifactory, if you want...

So we've talked about private Helm, private Docker, which sounds like Harbor or Quay.

Yeah, it's Quay.

Does Quay have Helm?

It does.

OK, so Quay. What about, like, private npm for that?

Yeah. So yeah, I don't have experience with the private npm.

OK. And then, like, everything needs to have, like, SAML or whatever, SSO.

Good luck getting that with anything that costs less than, like, $30,000 a year.

Well, there's nothing.

We have...

We have SAML with GitLab at zero cost.

Yeah, so lucky.

Yeah, I think Harbor did, when we looked at it last. It had some kind of open source SSO support.

But any kind of, like, SaaS product? Forget it.

That's what I meant.

Open source, like Harbor's open source.

Now like if we're not going to go with Artifactory because of all this stuff.

You guys, you're talking about, like, you know, they're going to hamstring high availability to make you pay $30,000 a year. Which, for a big company...

I mean, you still get an unlimited number of users. $30,000 a year is $1 a user for me, right?

No, I mean, for the company you're at, this is, yeah,

a no-brainer.

It's $1 a user, but you've also got to factor in the cost of engineering, right? Human effort, which is much greater.

Yeah, so I think it's well worth it with that consideration.

So I don't know.

It doesn't seem to me that Harbor supports npm, or supports SSO at all.

There's... I forgot what the website is, does anyone know? There's a website that shames companies for putting SSO behind paywalls, like, you know, at the pro version or enterprise version or whatever.

So, Charles, sir.

No, like I said.

SSO, as always.

And Alex, you love this, right? The enterprise...

You'd figure. I get so pissed off at that.

Yeah, probably.

So, it's sso.tax.

Check out sso.tax.

Oh, I love it.

OK, that's it.

There's a giant list of companies, and it's got the percent increase of how much extra you have to pay to get SSO. Go down a little bit.

And there's a table.

I love it.

Yeah, I did a comparison somewhere.

Let's see.

Yeah Oh, this is great.

Oh, thank you for sharing that. Would someone mind filling me in on the unethical aspect of this?

SSO makes you more secure.

And it's like these people are trying to fleece you of more money just to be more secure.

My favorite is all the security tools that do this.

Like, up above the table there are, like, three or four paragraphs explaining why this is a problem.

Like Yeah.

Yeah. And, like, Slack.

Right, you know, $6.67 a user? OK.

I can stomach that.

But now I want to enable SSO.

And look, we're a small company.

But I had people come and go.

I don't want to have to log in all the time and remove users.

It's like, I just want to use G Suite. G Suite is free,

basically, right?

So why can't I just get that.

No, I need to double my costs, double it, to use single sign-on. And, like, I'm not an enterprise.

But I can benefit from it.

So it gives me high blood pressure too.

But when you go through, like, the thought experiment...

All right.

What are all the ways we can, like, charge companies more when our product is more valuable to them?

I get it.

Single sign-on is a pretty good indicator.

But it's but there are a lot of false positives.

And collateral damage for smaller companies as well. Sentry is charging 200 percent.

Some of them are ridiculous, in the 500% area of the table.

Wow, is Zapier here?

There's your... there's Zapier.

It's down at the bottom.

Yeah Yeah.

No, I saw.

So the only places I use single sign on are where it doesn't cost me $1 or more to use it.

Unfortunately, I just can't justify it.

That's, like... this is why.

OK, Eric runs a company.

He's got six employees or I don't know how many employees.

He has.

But he's got six.

He hires one more.

You know that.

He adds them to a dozen different services that Eric runs.

You know all over the place.

You know each one has its own user name and password.

And that employee decides to show up drunk one day and Eric needs to fire them.

And you know he needs to remember to go through every single service and take away their user account versus go into his Active Directory or whatever, go look, they're gone.

Yeah, exactly.

And then keep in mind that thousands and thousands of companies depend on what we have out there, and we need to be secure. But the companies fleece us.

I can't do it.

Anyway.

Thank you for sharing this this was great.

I'm going to use this refer to this company.

I read an article: apparently someone got this guy's cloud credentials,

and the hacker deleted, like, 10 grand worth of servers. It doesn't surprise me.

That's all I care about.

That was one of the best stories.

Like, this guy does an amazing job telling the story.

The chief security officer talk. Does it talk about how ShapeShift got hacked?

This one here is a great listen.

First of all:

It's why you need to use certificate-based SSH, but also why single sign-on is critical. And implicitly, when I say single sign-on, what I mean is MFA, or multi-factor authentication.

So that, you know, if passwords are compromised, somebody can't just reuse them or log into your systems and delete $10,000 worth of services.
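For anyone who hasn't tried certificate-based SSH, the mechanics are just a few ssh-keygen calls. This toy sketch creates a CA, signs a short-lived user certificate, and inspects it; all the file names and the "alice" identity are made up:

```shell
# Toy CA setup: one CA key signs short-lived user certificates,
# so servers only need to trust the CA's public key (via
# TrustedUserCAKeys in sshd_config), not individual user keys.
ssh-keygen -q -t ed25519 -f ./ca_key -N '' -C 'example-ca'       # the CA keypair
ssh-keygen -q -t ed25519 -f ./user_key -N '' -C 'alice@example'  # a user keypair
# Sign the user key: identity "alice", principal "alice", valid for 1 hour.
ssh-keygen -q -s ./ca_key -I alice -n alice -V +1h ./user_key.pub
ssh-keygen -L -f ./user_key-cert.pub   # inspect the resulting certificate
```

Because certificates expire on their own, revocation becomes "stop issuing" rather than chasing down authorized_keys files.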

So this was a blockchain company called ShapeShift, and one of their engineers was corrupted: offered a bunch of money by some hacker to give up his keys. The hacker used them and compromised the systems.

They quickly traced it back to that engineer and they fired him.

You know, like, however many thousands of bitcoins were stolen, something like that.

And then the hackers...

So they shut down everything, they changed the keys, and, like, the next day or a few days later more bitcoins are stolen and they can't figure out what's going on.

And it turns out that basically they had installed a rootkit on somebody's laptop, and every time they rotated the passwords,

They were just getting the latest password, and they were able to use that to keep compromising.

They even rebuilt their entire infrastructure,

I think in a separate cloud or in a new account, something like that.

And they were compromised again, because they didn't have multi-factor.

All right.

So we are basically at the end of the office hours for today.

I use Duet to share my screen,

and now somebody's going to say they can't see my mouse on that screen.

Oh, well.

So, first office hours I've been to. I keep meaning to come, but I get the email and then read it, like, five hours later.

Yeah. And also time zones can be complicated.

And now I'm going to have it on my work calendar as busy during this time.

Like people don't see what my events are.

They just say I'm busy.

Do you work...

I assume you work remotely, because I hear your house in the background.

My messy office, yeah.

Yeah, I've got to turn on my background when I'm talking to customers and stuff.

Oh, I see. So, Mike, like, where are you?

Are you on the call, Mike?

No, you're not.

But one of my buddies who's in SweetOps kicked the tires on AWS Control Tower and did not have many positive things to say.

So I was hoping to hear from him today, but he couldn't make it.

Well, we'll leave that as a talking point for next week.

Other than that, looks like we covered everything. Except, OK:

GitHub Actions. I'll take that for next time.

More fun experiments with GitHub Actions.

Anyways, guys, great discussion.

Always have a good time.

Thank you for showing up.

I'll see you guys next week same place same time.

Bye, guys.


SweetOps Newsletter – Issue #2


This past week we crossed 1,600 members! That means we've grown by over 60% since July. We now span 57 timezones with over 600 DAU. An enormous amount of insightful information has been shared during this time. Thank you, everyone, for your contributions and generous support! Please keep them coming.

If you haven't yet signed up for our Slack team, join us!

Kubernetes News

Easily Import Secrets to Kubernetes

Easily populate Kubernetes secrets from 1Password (and others). This operator fetches secrets from cloud services and injects them in Kubernetes. ContainerSolutions/externalsecret-operator

Kubernetes Development Environments

Garden looks interesting! It automates the repetitive parts of your workflow to make developing for Kubernetes and cloud faster & easier.

Ship Kubernetes Event Stream to Sentry!

Let's be honest. Errors and warnings in Kubernetes often go unnoticed by operators. Even when they are noticed, we might not realize with what frequency they occur and we lose the context of what else is going on in the cluster. With this tiny service deployed in your cluster, you'll get all errors and warnings loaded into Sentry where they will be cleanly presented and intelligently grouped. Plus, you can leverage all the typical Sentry features such as notifications and comments which can then be used to help operations and give developers additional visibility.

Terraform News

HashiCorp Forums are Live!

HashiCorp has finally launched their public support forums using Discourse. This is awesome stuff! Get help from the community for all major products like Terraform, Vault and Consul.

Export ClickOps to Terraform
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code – GoogleCloudPlatform/terraformer

Cloud Posse ECS Terraform Modules

We've upgraded all of our ECS Terraform modules to support Terraform 0.12 (HCL2). As part of this, we've added terratest to all the ECS modules so we can review your contributions quicker and provide greater stability!

Security News

Google Warns LastPass Users Were Exposed To ‘Last Password’ Credential Leak

Google Project Zero security researcher reveals that the LastPass password manager could, somewhat ironically, leak the last password you used to any website you visited. Ouch.

Sudo Flaw Lets Linux Users Run Commands As Root Even When They're Restricted

A vulnerability in Sudo, tracked as CVE-2019-14287, could allow Linux users to run commands as root user even when they're restricted. How can we still be finding bugs in sudo decades later?

If you’re not using SSH certificates you’re doing SSH wrong

SSH has some pretty gnarly issues when it comes to usability, operability, and security. The good news is this is all easy to fix. SSH is ubiquitous. It’s the de-facto solution for remote administration of *nix systems. SSH certificate authentication makes SSH easier to use, easier to operate, and more secure.

(Pro tip: use teleport by Gravitational)

Kubernetes 'Billion Laughs' Vulnerability Is No Laughing Matter
A new vulnerability has been discovered within the Kubernetes API. This flaw is centered around the parsing of YAML manifests by the Kubernetes API server. During this process the API server is open to potential Denial of Service (DoS) attacks. The issue (CVE-2019-11253 — which has yet to have any details fleshed out on the page) has been labeled a ‘Billion Laughs' attack because it targets the parsers to carry out the attack.

Want more? Check out our Slack archives to learn what our community is all about.


Are you looking for your next gig? Check out our #jobs channel in SweetOps for recent postings. Here are some recent ones that have been posted.

Brian Tai writes “AuditBoard is hiring a DevOps Engineer! AuditBoard is a fast-growing startup located in the Greater Los Angeles area. Our offices are located in El Segundo and Cerritos. Our SaaS product consists of a suite of solutions for internal auditors to improve and streamline their day-to-day work. (imagine a GitHub/Trello hybrid for auditors) We have signed and continue to sign many new customers including Walmart, Snap, Toyota, and many others in the Fortune 500. “

DevOps Engineer - AuditBoard

Amanda Heironimus posted, “PlayQ is looking for a Senior Cloud Services Engineer to join our team in Santa Monica, CA. As a foundational member of our DevOps team, you’d receive the perfect amount of support from our global team while enjoying plenty of room to grow and contribute to new and exciting projects. We empower our teams to produce meaningful and impactful work, so you’ll also have the unique opportunity to take the lead in shaping and informing our infrastructure, managing deployments, and ensuring that mission-critical systems are functioning effectively and consistently.”

Job Application for Senior Cloud Services Engineer at PlayQ

Free Weekly “Office Hours” with Cloud Posse

You are invited to our weekly “Lunch & Learn” meetings via Zoom every Wednesday at 11:30 am PST (GMT-8). Join us to talk shop! This is an informal gathering of 10-15 people, where you get to ask questions and watch demos.

Register here:

After registering, you will receive a confirmation email and invite containing information about joining the meeting.