Public “Office Hours” (2019-11-27)

Erik Osterman · Office Hours

1 min read

Here's the recording from our “Office Hours” session on 2019-11-27.

We hold public “Office Hours” every Wednesday at 11:30am PST to answer questions on all things DevOps/Terraform/Kubernetes/CICD related.

These “lunch & learn” style sessions are totally free and really just an opportunity to talk shop, ask questions and get answers.

Register here: cloudposse.com/office-hours

Basically, these sessions are an opportunity to get a free weekly consultation with Cloud Posse where you can literally “ask me anything” (AMA). Since we're all engineers, these sessions also help us better understand the challenges our users face, so we can focus on solving your real problems and address the gaps in our tools.

Machine Generated Transcript

Let's get the show started.

Welcome to Office hours.

It's Wednesday, November 27, 2019. My name is Erik Osterman and I'll be leading the conversation.

I'm the CEO and founder of Cloud Posse. We're a DevOps accelerator: we help startups own their infrastructure in record time by building it for you and then showing you the ropes.

For those of you who are new to the call the format of this call is very informal.

My goal is to get your questions answered.

Feel free to unleash yourself at any time if you want to jump in and participate.

We host these calls every week, and we automatically post a video recording of the session to the office hours channel as well as follow up with an email, so you can share it with your team.

If you want to share something private just ask.

And we can temporarily suspend the recording.

With that said, let's kick this off.

I have a few talking points that came up this week that I'd like to bring to everyone's attention.

But we'll also get to answering your questions too.

So if there's ever a lull in the conversation, we'll get into the new helmfile provider for Terraform that mumoshu came up with.

By reviewing his comments on that,

I discovered the Terraform shell provider, which is pretty rad.

It's the ultimate escape hatch for plugging into Terraform without being a low-level Terraform developer. And we can always wrap things up with GitHub Actions if there's nothing else to cover.

All right.

I was just talking with Zach.

Zach is a new member of the community.

He's been helping by submitting a bunch of PRs on our packages repo.

Thank you for those.

And he was just sharing some of the stuff he was doing on big data pipelines, stuff like that.

Hey Zach.

Thank you. It was a bunch of PRs, well, a couple.

Sorry for the back and forth.

We've been struggling with that.

The packages repo is a little bit of a testbed for CI/CD for us: testing GitHub Actions, switching things around, and a bunch of other little experiments and stuff.

So yeah, we've had some issues with stability on the pipelines in the packages repo, and it impacted your ability to contribute there. Did you push a fix for the latest thing you were working on?

With the tools, I did that. Hopefully it will work now.

OK.

Well, actually, I still need to run the tests again to see if that works.

Hey Dale, welcome. Hi Maddie. Cool.

So let's see. Are any of you using helmfile today?

Yeah.

All right, cool. And then I take it you guys are also using Terraform, right?

So then this is kind of exciting.

I don't know if you saw the news.

But I've been bugging mumoshu for the better part of the last year that we need a Terraform provider for helmfile, so that we can integrate it with the full lifecycle of everything else going on. And I'm pretty excited that he came out with this.

The benefit of using this is that you can now pass values like this and have interpolations, or reference outputs of other modules.

So it will streamline your internal automation of bringing up a cluster from scratch that depends on integration touch points provisioned by other Terraform modules.
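To make that concrete, here's a rough sketch of the idea (the `helmfile_release_set` resource and its attributes are my recollection of the provider's README, so treat the exact schema as an assumption):

```hcl
# A minimal sketch, not verified against the provider docs: feed outputs
# from other Terraform modules straight into a helmfile release.
resource "helmfile_release_set" "ingress" {
  path        = "./helmfile.yaml" # attribute name assumed
  environment = "default"

  # e.g. pass the certificate ARN provisioned by a (hypothetical) ACM module
  values = [
    yamlencode({
      certificateArn = module.acm_certificate.arn
    })
  ]
}
```

That's the streamlining: the cluster's helmfile picks up its integration touch points directly from Terraform state instead of you copying values around by hand.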

Is it available? What was that? Is this available now?

It's not. I just opened up an issue here to publish a binary.

He's pretty good about that.

He'll probably do that in the next day or so.

But right now, there's no published binary.

So you've got to go get it yourself, build it, and install it.

And since it's not an official provider you know you need to install them in the plugins directory like this.

That said, I'm thinking of distributing a package for it in our alpine repository.

And then, I believe (I haven't tried this actually) we can set this environment variable so you can have a shared path on the filesystem where you have all those plugins. If anyone can correct me on this, that'd be great, if that's not the case.

All right.

If I have a chance to test it out in the next few weeks, I'd be able to tie in my Terraform with the helmfiles that I've set up for my clients.

Yeah. Well, we're testing it this week; Andre on my team is working on that right now. We're going to be using it together with Terraform Cloud.

Well, one of the things he did was credit this provider. Dude, I've been looking for one like this.

There's a bunch of them out there.

There's, like, a "terraform provider external" or something by someone else, but it hasn't been updated in like two years. So here's a provider that is being maintained, with commits as of just a few days ago.

So that's cool.

And if you look at what this provider does, it exposes the underlying lifecycle hooks in Terraform: all the CRUD operations, create, read, update, and delete.

And that's cool because now you can just tie into that. Like, if you wanted to use Terraform around kops (kops is quite cool), you can now do that just by scripting it here.
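To illustrate what I mean, here's a minimal sketch (the `shell_script` resource and `lifecycle_commands` block are from my memory of that provider's README; the `mycli` tool is hypothetical):

```hcl
# Sketch only: each CRUD lifecycle hook maps to a shell command.
# The read command is expected to print JSON, which populates `output`.
resource "shell_script" "cluster" {
  lifecycle_commands {
    create = "mycli create --name example | tee state.json" # hypothetical CLI
    read   = "cat state.json"
    update = "mycli update --name example | tee state.json"
    delete = "mycli delete --name example"
  }

  environment = {
    REGION = var.region
  }
}

output "cluster_id" {
  value = shell_script.cluster.output["id"]
}
```

So anything with a CLI, kops included, can be driven through the normal Terraform plan/apply lifecycle this way.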

Sounds like something maybe you'd be interested in, Dale, I don't know.

Certainly. Pablo, welcome.

So I submitted a PR, I think a week or two ago, on the IAM user module.

Oh shoot. OK, let me see here.

So, tags and attributes: I'm trying to leverage the tags and the attributes to do a mock-up to play through the scenarios I saw in the documentation, where teams were actually tagged (like, members of the team had a tag and the resources had a tag) and you could only interact with the resources that you're tagged for.

So you can sort of step through, like, a role-based versus attribute-based type scenario with resources.

Yeah, well, I'm familiar with it conceptually, but yes.

So I was watching all of the talks from, I think it was, an AWS event, and they were walking through using attribute-based access control, and they walked through a scenario where different teams were tagged, like zombie and unicorn, and what resources they could actually terminate or interact with.

So you wouldn't have team members touching other teams' resources, that kind of thing.
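For reference, the ABAC pattern being described boils down to an IAM condition comparing the resource's tag to the caller's principal tag. A sketch (the Team tag key is just an example from those talks):

```hcl
# Members of a team may only stop/terminate EC2 instances whose Team tag
# matches their own principal tag. Sketch of the ABAC scenario above.
data "aws_iam_policy_document" "abac" {
  statement {
    sid       = "TeamScopedInstances"
    actions   = ["ec2:StopInstances", "ec2:TerminateInstances"]
    resources = ["*"]

    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/Team"
      # $${...} escapes Terraform interpolation; IAM resolves the
      # principal tag at evaluation time.
      values = ["$${aws:PrincipalTag/Team}"]
    }
  }
}
```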

Did you say you had a PR for that, or...?

Yeah, I think there's an open PR.

Well, it wouldn't be coming from me directly; it would come from the Lunar Ops account.

Gotcha, I thought you said it was you.

The iam-user module, I think it was.

Yeah, I didn't see an open PR there. Unless maybe we merged it; I'll take a look again.

Oh yeah, this was already merged. OK, yeah.

Thank you.

Thank you for that.

Andre reviewed it, and he's the go-to guy on my team for Terraform stuff, so the fact that he didn't tear it up, that's a good sign.

Perfect. Yeah, is there a release tag?

Yeah, I cut a release; the latest one has your changes.

Thank you, thank you.

Cool. Looks like Pablo dropped off. Nadia, are you just auditing today? Any questions?

Yeah, no questions, just following along. We're not using Kubernetes right now; we're using ECS. Or just kind of playing around with it for the most part.

OK, but with Terraform?

Yeah, we're using, or I should say I'm using, you guys' modules.

Oh, awesome. Good to know.

Yeah, we've updated all the ECS modules; they now support HCL2 for Terraform 0.12 as well. Are you on the latest?

I have not integrated your changes yet. I actually upgraded them on my own, like, I don't know, a month or two ago. It's not used in production or anything right now at all, but I'll update to yours again.

Not long ago we added Terratest too, so they're all tested on every commit. If you guys are curious about Terratest and haven't really dabbled with it, it is pretty sweet.

I'd been pushing it off for a while, to be honest, and I wanted to just use Bats or something simple like that. But Terratest does work really well.

So we have a source directory: test/src is where we put the Terratest code. Our convention is we name the test after the example. If you go under our examples here, complete is the one that we're testing, and the fixtures are what we test our modules with.

One thing that we do that's nice is we distribute a Makefile here, which makes it easy to use Go in the current working directory without having to set up the whole scaffolding of the Go directory structure. We just hack it together with some simple symlinks here, so you can just run make test and it just works.

Is it run in CI, or like a pre-commit hook?

We don't use many pre-commit hooks right now, mostly because they're unenforceable unless you actually add them to the CI.

But after Andre's demo last week, I'm more curious about it. Specifically, I want to add a pre-commit GitHub Action to a lot of our projects and then add that there. So yeah, it's in the backlog.

I do want to do that, but not right now.

So yeah, for us, we run it right now in a Codefresh pipeline, test.yaml basically.

Yeah, we just clone the repo and initialize the build harness and our test harness. The build harness is what we use everywhere; we standardize how we interact with our projects.

And then here we run the tests in parallel. We do the linting, and we run the Bats automation tests (Bats is short for the Bash Automated Testing System). We have a bunch of stock tests that we distribute in our test harness for that. And then we have the Terratest step right here, where we just call make in this test source directory. Nothing much to look at.

Do you do anything specific in there apart from Terratest, or...?

Sorry, Jeremy. Yeah, I didn't even open it up. Let me see.

So we're not doing, like, elaborate validation. Look, there's the Pareto principle, the 80/20 rule, right? You can get the maximum amount of benefit by doing maybe 20% of the work, but getting that last bit will take 80% more effort. So we do the minimum: in our experience, just bringing up a module and bringing it down again catches 99 percent of all the issues you're going to have. Testing the functionality of what you've built, yes, that's valuable, but it's going to take you 80% more effort to implement.

So in a lot of our tests we're doing kind of the minimum effort, where we instantiate the module with the example that we publish, using our fixtures, and then we destroy it afterwards. We evaluate the outputs of that module to ensure they match what we expected.
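For anyone who hasn't seen Terratest, the minimum-effort test described here looks roughly like this (module paths, fixture file, and the expected output value are illustrative, not from a specific Cloud Posse repo):

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

func TestExamplesComplete(t *testing.T) {
	t.Parallel()

	terraformOptions := &terraform.Options{
		// Point at the published example, driven by fixtures.
		TerraformDir: "../../examples/complete",
		VarFiles:     []string{"fixtures.tfvars"},
	}

	// Bring it down again no matter what; this alone catches most issues.
	defer terraform.Destroy(t, terraformOptions)

	// Bring it up.
	terraform.InitAndApply(t, terraformOptions)

	// Evaluate outputs against what the fixtures should produce.
	id := terraform.Output(t, terraformOptions, "id")
	assert.Equal(t, "eg-test-example", id) // expected value is illustrative
}
```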

But we're not doing really elaborate testing of functionality.

The exception to that is our EKS modules; we're doing a lot more on those. Andre spent some more time to ensure that we wait in loops until the nodes join and everything is healthy.

What's so cool about Terratest being in Go is that you can leverage the whole ecosystem of Go modules. So Andre implemented an event handler here to intercept when the nodes join.

One of the most common problems people were reporting to us was nodes not joining the cluster. So by implementing this test here, we ensure that no change breaks the ability for nodes to join the cluster.

And we're just using the built-in support here. Well, not built-in; I forget exactly what Andre wrote, I'm just riffing here. He's using the Kubernetes Go client libraries to make that interaction easier.
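Not Andre's actual code, but the node-join check amounts to something like this with client-go (assuming a client-go version where `List` takes a context):

```go
package test

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodes polls until `want` nodes have joined and report Ready,
// failing the test if that doesn't happen before the deadline.
func waitForNodes(t *testing.T, clientset *kubernetes.Clientset, want int) {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		require.NoError(t, err)

		ready := 0
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready++
				}
			}
		}
		if ready >= want {
			return // enough nodes joined the cluster and are Ready
		}
		time.Sleep(15 * time.Second)
	}
	t.Fatalf("timed out waiting for %d Ready nodes", want)
}
```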

That's my dog. He's outspoken.

Yeah, if the Terratest stuff is interesting, let me know in the office hours channel and I can have Andre prepare something for a future session.

Yes, it's cool.

All right.

I'll make a note of that.

So with things like Terratest, where do you draw the line...

Sorry, I have to get my dog's bone; it's driving me crazy. I'll be right back.

Where do you draw the line on testing whether the provider and Terraform itself work versus whether my business logic works? I find that when testing infrastructure, a lot of times you're just retesting stuff that should already be tested upstream.

Yeah, exactly.

You don't want to test, like, the Terraform providers themselves; they all have extensive testing, so you don't need to test that kind of stuff. That's why I think testing the values of the outputs is where it's valuable.

And then going back to the Pareto principle applied to testing, right: if you're going to spend 80% more effort trying to test for something, is it worth it for you right now, when 99 percent of the problems are caught just by bringing something up and down? That's how I draw the line.

Basically, if we're going to have to spend three days developing a test for this, then we're just not doing it right now, because we've got too much going on.

So, like, business logic? Let's say, yeah. I mean, it's a difficult decision. Let's say you have some business logic to deploy S3 with a CDN, and you have some CORS rules that you need there. You could probably test pretty easily that the CORS rules are effective by simply provisioning that module, then initiating an HTTP GET against a resource and checking that the headers are good. And that would not be a big project; that would be just an hour's worth of effort.
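That hour's worth of effort is roughly this much code in Terratest (the endpoint and allowed origin here are made up for the sketch):

```go
package test

import (
	"net/http"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// checkCORS issues a GET with an Origin header against the provisioned
// endpoint and verifies the CORS response header matches what the module
// was configured to allow. Endpoint and origin are illustrative.
func checkCORS(t *testing.T, endpoint string) {
	req, err := http.NewRequest("GET", endpoint, nil)
	require.NoError(t, err)
	req.Header.Set("Origin", "https://app.example.com")

	resp, err := http.DefaultClient.Do(req)
	require.NoError(t, err)
	defer resp.Body.Close()

	assert.Equal(t, "https://app.example.com",
		resp.Header.Get("Access-Control-Allow-Origin"))
}
```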

Yeah, it makes sense. It's just drawing those lines; especially, I've noticed, more junior developers who are just getting into integration testing struggle with that a lot. You end up with a lot of tests that just duplicate upstream effort or don't provide any value for the time spent.

Yeah, I can't say anything that doesn't sound kind of obvious to you or me. The thought process is really: test what is absolutely unique to what you're implementing, not what is unique to the underlying primitive or resource. So the fact that you need a CORS header for a certain kind of request, that's unique to your business; that's bona fide for testing. The fact that the bucket is created? You don't need to test that. The bucket was created if Terraform says it was created.

Exactly. Where this proved incredibly useful was when we first started porting our modules to 0.12. I think we've got like 50-60% of our modules, and all the most important ones, on 0.12 now; it's just some little remnants here and there that are not. But when we ported our null-label module over, it was a pretty complicated module in terms of all the replacements and coalescing and formatting that we were doing in it. So having tests on the outputs, to make sure that we didn't introduce a regression, was critical, and it actually helped me catch a bunch of things along the way.

So here's an example: we're just testing that the module is producing the kinds of outputs that we would have expected, given the kinds of inputs that we're giving it. And those inputs are in the fixtures. Cool.

Any other interesting announcements you guys have seen? Cool projects lately?

I bookmarked one. I don't really know much more about it, but, what was it... Rio. By Rancher. Yeah, it's like a GitOps solution; it's not recommended for production just yet, but it looks fascinating. A micro-PaaS.

I missed this; I didn't see this. I just keep clicking in the wrong places. Let's see.

So: Rio is a micro-PaaS that can be layered on top of any standard Kubernetes cluster, consisting of a few custom resources and a CLI to enhance the user experience. Users can easily deploy services to Kubernetes and automatically get continuous delivery, DNS, HTTPS, routing, monitoring, autoscaling, canary deployments, and git-triggered builds.

Oh, very cool.

So is this pretty minimally deployed? My problem with a lot of these PaaSes is you've got to deploy so much infrastructure just to support the PaaS; that doesn't make me excited. I'd love to see an architecture diagram for this.

OK, so it's built on Linkerd, with Prometheus tacked on, and Let's Encrypt. OK, that's very cool.

Yeah, it'd be a fun little POC too. I'm just going to share this to the office hours channel.

So related to this, it reminds me of something else I saw. It's by Weaveworks. Not Flux; Flux has gotten a lot of attention lately because of the collaboration now between Argo and Flux, and, who is it that's backing it, Red Hat? Or there's somebody spearheading this, is my understanding. But no, it's Weaveworks' Flagger. Have you guys seen this?

Whoops Yeah, I clicked on the wrong one.

So everyone talks about doing canary rollouts and blue-green rollouts, but when you get down to actually doing it, there's a little bit more to it.

And the right way to do it, of course, is a staggered rollout that is integrated into your monitoring system, so it can automatically progress based on your monitoring solution, your metrics. And that's basically what Flagger is doing; it's a controller for basically that process. As you see, it ties into the Kubernetes APIs to see the progress of your rollouts, Prometheus for your metrics, and then it can progressively increase the traffic to your services.

That's cool; I was looking at this one as well.

If you do actually kick the tires on it, or end up doing it as a serious thing, do let me know. It'd be pretty cool.

Also, this goes for anyone here: if you guys are working on anything neat and you want to do, like, a show-and-tell, that would be really cool. So, Flagger and Rio. All right.

I'll share Flagger in office hours here. Have you guys heard of Podman?

Yes. Yeah, I don't know how realistic that is as far as using it in AWS, because it's supposed to be almost a drop-in replacement for Docker.

Yeah, I mean, I guess it really depends on what perspective you approach this stuff from. Podman comes up time and time again when you're looking at how CI/CD providers allow building Docker containers without giving away the crown jewels. So basically, they wrap around Podman for doing those Docker builds in a secure way.

But yeah, I haven't tackled it. There are all these micro-optimizations like this that are really awesome, and I think maybe worth pursuing, especially if it's just for your closed ecosystem within your company.

But at Cloud Posse, what we're really trying to do is provide the roadmap for integration of all of these components and systems, and as much as possible, we want to defer thought leadership to the tools and technologies that we use. So for example, for Kubernetes today we've been mostly using kops. We do have the EKS Terraform modules as well.

But I bring this up because if kops decides they want to use Podman, that's awesome; we'll support it. But if kops doesn't support it out of the box, we're not going to break compatibility with them just to support it and introduce more and more stuff for us to manage.

Are any of you running highly threaded apps on Kubernetes right now, and using Prometheus?

Yeah, not yet. I actually do want to reintroduce using Prometheus within our org, kind of to clean up our deployment pipeline as well, and do actual canary deployments with it. But nothing yet. There are a few products coming up next year that may actually require that, but not at present.

OK. Yeah, why do you ask?

So there's a fix that has recently been merged into the alpha channel of CoreOS, as it relates to how CPU throttling is implemented in the Linux kernel and a big problem in how you can monitor for it under Kubernetes. The problem is that you get a lot of basically erroneous CPU throttling alerts if you're running highly threaded apps under Kubernetes, so I'm just spreading the word on this fix. It's in the CoreOS alpha channel right now; unfortunately, it doesn't look like it's going to hit kops, or the Debian image that kops uses, anytime soon.

Welcome. Oh, I missed the alert; what was it?

I have a request.

All right, next year I'm hoping you actually do another session on Geodesic and your actual workflow, for a bit more of, I guess, a deep dive. I liked what you actually showed us last time; it improved my workflow greatly. I think there's a lot more that you guys actually do, which I think is a really cool process. I just want to peek behind the curtain, pretty much.

Yeah, no, that'd be pretty cool. Also, someone like Alex might also be interested in showing his workflows and stuff.

He's using Geodesic weekly, if not daily; his workflows are efficient. I'm sure you guys have a lot more tricks and tips, but I'm happy to do an ad hoc thing.

Yeah, that's a great one. I should get Jeremy on the line for that one; Jeremy has taken it to the nth degree. What he does inside of Geodesic is stuff that I haven't even kicked the tires on. Basically, there's a bunch of extensions that he's added, so you can add customizations for your experience that don't necessarily affect other developers.

This is kind of antithetical to our original plan, which was that everyone should have an identical environment. But the reality was that different developers have different shortcuts, different aliases, different ways of working, and I gave up trying to fight for the one true way.

The distinction there is that the tools, and that portion of the environment, should all be the same.

Yeah, but if somebody uses vim and somebody uses emacs, like, who cares, right?

I agree. I think that's the distinction: there's a line there where it doesn't matter anymore.

Yeah, so actually this just jogged my memory; it's a total non sequitur here. One of the things that we've been seeing a lot more problems with (and you've been experiencing this as well, Alex) is basically fluentd directly integrated with Elasticsearch, and overwhelming Elasticsearch. Elasticsearch is just a very dainty, sensitive, finicky search product.

It doesn't like eating too fast or it will just complain.

So a couple interesting things there.

We haven't implemented this yet, but I was just doing some research and bringing this up because one of the announcements was just back in October.

So this is pretty new.

So I think the better architecture for us to implement is going to be one using the fluentd Kinesis plugin, and that's existed for a while. So using Kinesis as a buffer in front of Elasticsearch, or Kibana rather, so you don't overwhelm Elasticsearch.

Yeah, exactly: controlling the ingestion.

So that's one thing. But once it's in Kinesis, (a) it can buffer it, like you said, and (b) you can send it to multiple destinations or sinks, and one of those sinks can be S3. So now you can have long-term storage in S3, which can be queried with, what is it, Athena or something. And then you can also have your real-time search with Elasticsearch, with a shorter retention period, because you only need maybe the last week or something like that.

So this was the one, let me click again here for it. This is the fluentd Kinesis plugin. But then Kinesis, and it's already supported with Terraform, can ship the logs to Elasticsearch. So, an Elasticsearch destination.
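The Terraform side of that is roughly a Firehose delivery stream with an Elasticsearch destination and an S3 backup. A sketch (the domain, role, and bucket references are assumed to exist elsewhere in your config):

```hcl
# Sketch: Kinesis Firehose buffering logs into Elasticsearch, with every
# document also backed up to S3 for cheap long-term retention.
resource "aws_kinesis_firehose_delivery_stream" "logs" {
  name        = "logs-to-elasticsearch"
  destination = "elasticsearch"

  elasticsearch_configuration {
    domain_arn     = aws_elasticsearch_domain.logs.arn
    role_arn       = aws_iam_role.firehose.arn
    index_name     = "logs"
    s3_backup_mode = "AllDocuments" # also land everything in S3
  }

  s3_configuration {
    role_arn   = aws_iam_role.firehose.arn
    bucket_arn = aws_s3_bucket.log_archive.arn
  }
}
```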

So it really does look easy enough, right?

And then related to that, there was just the announcement...

Yeah, I've used Kinesis and Firehose over the years, outside of Kubernetes, to deliver logs to Elasticsearch. It works fine. I used it last year from a bunch of Windows boxes using the AWS logs agent; it works great.

That's cool. Yeah, I mean, I think this could have been implemented outside of this, like you said; I guess they just made it easier, more turnkey now.

Yeah, back when I was doing it last year, I used to have to munge it through a Lambda: you'd see pieces for the Lambda to munge the logs into whatever format you needed for Elasticsearch.

That makes sense.

Cool. Any other cool projects you guys came across recently? I'm going to head over to my own GitHub stars and see what I've found, if there's anything interesting. Alex, anything?

Alex anything.

Sorry, I missed that; I was responding in Slack. What was the question? Oh, OK. No, I haven't come across any cool little projects. I mean, I have a lot of cool little projects in my head; I just haven't actually played with any of them.

Yeah, Prometheus, as you know, was broken for a couple of weeks, and I finally got it fixed up and found out there's a whole bunch of alerts and problems that have been going on. There's a little fragility there.

I've noticed that Prometheus can still collect metrics even if it can't display them, or you can't use them because that portion of the stack isn't working. But that doesn't necessarily cause the watchdog that we set up in Alertmanager to go off, because metrics are still coming in. And so you don't actually know that stuff is broken until a user reports it, like some dashboard being broken. Alertmanager was also having the same problems; but, like, if your watchdog never fails, you don't know. That's one of those things: for us, we're just spread thin, so we don't have folks focusing on that daily, so we just didn't notice it for like a week.

Yeah, and then when you need it, it's actually down. That's frustrating.

Yeah. Those are things we can roll out in our next engagement. Every engagement we do with a customer, we basically extend the alerts and extend the things we monitor. So that might be one of those areas.

It's looking like it's going to be EKS; that's where we're probably going to be extending our support, as it's maturing as a product. It seems pretty popular, and now with managed node pools, that's pretty awesome.
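For context, a managed node group is about this much Terraform (a sketch; the cluster, role, and subnets are assumed to be defined elsewhere):

```hcl
# Sketch of an EKS managed node group, as just announced at the time.
resource "aws_eks_node_group" "default" {
  cluster_name    = aws_eks_cluster.default.name
  node_group_name = "workers"
  node_role_arn   = aws_iam_role.workers.arn
  subnet_ids      = var.private_subnet_ids

  scaling_config {
    desired_size = 3
    min_size     = 3
    max_size     = 10
  }
}
```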

However, there's Spotinst Ocean, which we briefly discussed last week. Spotinst Ocean is basically managed node pools for Kubernetes by Spotinst, and it also handles the complex calculus of scheduling the right kinds of nodes. So that begs the question: is it worth the 20% surcharge from Spotinst versus a decent enough managed node pool solution? Like, where's their value prop at that point?

Yeah. So I guess the answer is: it depends. Well, first of all, EKS managed nodes don't support spot at all today; it was just released, what, a week ago, so I guess that's probably coming very soon. What I mean by "it depends" is that it depends on your workloads and how consistent they are, basically what kinds of instances or nodes you need.

What I mean by it depends is it depends on your workloads and how consistent they are basically what kinds of instances or nodes you need.

Will you get by with a very with a naive kind of spot fleet configuration or do you need something more complicated.

What I what I think is so cool with respondents thing is how it sees how much memory you need.

And then it finds the right kind of instance, for the cheapest price that satisfies that requirement.

And I think that's really just very cool.

So it's not just it's not it's not just using spot.

It's also right sizing based on the instance tight.

And for that, you need a scheduler encumbrances and for that Amazon needs to develop a scheduler for Kubernetes that could do that.

Yeah, it seems pretty slick.

I want to look at it more when I have some time.

Also let's see here.

So, let's see. Someone on my team has been working on this, and she just implemented it. GitHub gets weird when you search on repo names. That's what you meant to do? Yeah, since I didn't work on it, I don't remember. Yeah, this here.

So I should publish this to the registry. But we just developed this module here, which extracts the launch configuration data from kops so that you can use it together with our helmfile for Spotinst to set the parameters that are needed. There's a bunch of parameters for Spotinst Ocean that come from your Kubernetes cluster, in order to automate that end to end.

I guess we haven't merged it yet, since it's a work in progress. Yeah, here. So that other module is for going from kops to Spotinst; I don't know, it's a demo for them. I don't know where those launch configurations are; I would have expected to see them in here somewhere, or the kops stuff somewhere. But either way.

So we actually cut bait on that. Spotinst supports kops natively, so to say; they have an explicit documentation site for setting up kops. We tried that, and the Spotinst kops controller got caught on something, so we couldn't figure it out. And even if you implemented that, there were large parts of it that were not fully automated, that were ClickOps-y, click-ish. So that's why, instead, we used the Terraform provider that Spotinst has for Ocean, and that's what we implemented.

Yeah, that makes sense given the hiccups she ran into.

Yeah. So this is what we implemented. This is, I guess, just a teaser.

Igor is going to present on the ThoughtWorks Technology Radar, probably next week. We knew that today was going to be pretty quiet given that it's Thanksgiving week. So we're going to talk about that, and then we'll probably do a deeper presentation on Spotinst as well.

Sure thing.

Did you have a thing on GitHub Actions? Did I miss that? I came in late.

This is just a placeholder.

Yes, I can talk about GitHub Actions all day long, because I've been doing a lot with them and I think they're a lot of fun.

Yeah, I just need to actually write my first one to understand how it works.

Yeah, I think a great place to start would be, like, cloudposse/packages on GitHub. Here I implement three actions that we're rolling out for us.

Here are the actions, the workflows. Here are the ones we're implementing for all of our repositories across the board.

For now, these are related a little bit to the fact that we do a lot of open source; that matters for us. But one thing is auto-assign; I really dig this one.

So in it, it references another repo. Basically, you can build a library very effectively for your company with all of your actions and then reference them very tersely, just like this. We use them.

So this here will auto-assign the PR when it's opened; this here will automatically greet new users who open an issue or pull request with a comment inviting them to join our Slack community.

This here is the one I'm getting the most benefit out of. All the other ones are nice, but this is the best: the auto-labeler, for immediate payoff.

It's great because when you go to pull requests, you can look at what we've had going on here. So every night at midnight GMT we have a process that kicks off that looks to see if there are any new packages that need updating.

This is maybe something for you too, Zach; it's pretty cool.

Because it's a monorepo, our packages repo, the labels make it very clear to see: yes, we updated packages, but what was updated?

OK, so here we updated kops and Teleport. Here we updated helmfile and sops. Here we updated Codefresh and Datadog. And here it's gitleaks and helmfile.

It's just a coincidence that it's two per day; really weird phenomenon.

Other times it could be dozens of packages, like this one.

One thing that would really help me is if you did the same thing for, like, your helmfiles repository.

Yes, I totally want to do that.

I could just pin them; like, when I want to use a helmfile, I see it's at master. I could just pin it to the newest release, but then I lose the context of why I'm upgrading that specific file out of your repository. So I'd prefer to pin it to the newest release of that helmfile, but actually finding that number is kind of a pain in the butt.

I mean, I think you can eventually get down to a commit.

I see what you're saying: what is the latest release related to that file, right?

Yeah, yeah.

Well, it's always, for the lazy: the release that contains the newest prometheus-operator, or the first release that contains the newest operator, I should say.

So if you auto-label them when, like, a release is made: hey, these are the changes. Because right now you're relying on a human process to remember to put, like, something in brackets, right?

Yeah, I could just click a label that says prometheus-operator and see all the releases that included changes to prometheus-operator. That allows me to much more quickly go through and say: OK, this was upgraded for this, and this has the bug fix I need.

Yeah, that's really good; that's a good tip. The only reason it's not here is I haven't gotten around to rolling it out yet, but I'm going to roll that out.

That's one of those things where I guess you just have to do something where you take the file names of the changed files and make those into labels somehow.

Yeah, so we've generalized the pattern, because while we practice polyrepo across the board, we do have a few monorepos for sure. So we kind of generalized the pattern for that; under packages you'll see how we're doing it in the Makefile.

What we do is we have a target here named after the file. One thing that's often forgotten these days, because make is used so much for phony targets (targets that don't correspond to files), is that make's origin is actually file manipulation, like using file modification times to trigger build processes. So here, this is saying: if this file doesn't exist, then go ahead and generate it. This is generating the labels by running this target here.

We iterate over all of the packages defined here. One thing I just learned in make is how you can define locally scoped variables like this; I'd say that's a power-user make trick. Then we iterate over those, stripping the slash and appending a star-star glob.
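A condensed sketch of those two make tricks (this is not the actual cloudposse/packages Makefile; paths and the label file format are illustrative):

```make
PACKAGES = $(wildcard vendor/*/)

# Target-scoped variable: LABELS only exists while this target builds.
.github/auto-label.yml: LABELS = $(patsubst vendor/%/,%,$(PACKAGES))

# A real file target, in make's original file-based spirit: it is only
# regenerated when the file is missing (or a prerequisite is newer).
.github/auto-label.yml:
	@for label in $(LABELS); do \
		echo "$$label:"; \
		echo "  - vendor/$$label/**"; \
	done > $@
```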

So that's the outcome of running that target. And then in here, under the readme targets, we just trigger that as a dependency every time we run make. And under .github/auto-label here, you can see the labels we generated from that process. So I'm just going to do code generation like that for the helmfiles, like you suggested.

Yeah. So, a public service announcement then: if you were relying on our packages to be stable because they weren't updated very frequently (a false promise of stability), that's no longer the case. Now they are updated, well, almost always within 24 hours of a new release. That is true for everything except these packages here, where we're pinning to a specific major, sorry, specific major.minor release; these are not yet updated automatically. But everything else that doesn't have a version number like this is updated automatically.

All right then, let's see. Any last thoughts here?

Just a quick one.

Go ahead.

Yeah, I was going to ask about Terraform, like, 0.13, or how long you guys think 0.12 is going to be around, I should say.

That's a good question.

Anybody have any Intel on that.

I don't know.

My understanding is the jump from 0.12 to 0.13 is going to be minor by comparison to the 0.11-to-0.12 thing.

Yeah, that was painful, that's right. And that despite being nominally a minor release. No, but, you know, semver pre-1.0 is a different construct.

Yeah. I know, because at Cloud Posse we exploit that as well. Most of our work is all pre-1.0, which means that the interface is subject to change. And it's kind of hard to argue that our stuff shouldn't be pre-1.0 when Terraform itself is not 1.0 yet. But you should know that you elicited kind of a cold sweat when you said 0.13.

No, it was not fun updating all of our modules.

So, all right, everyone.

Looks like we've reached the end of the hour here.

This about wraps things up.

Thanks again for sharing.

I always learn a lot from these calls.

It keeps me on my tiptoes.

A recording of this call is going to be posted in the office hours channel in a little bit.

And I'll see you guys next week same time, same place.

All right.

Thanks, thanks.

Author Details
CEO
Erik Osterman is a technical evangelist and insanely passionate DevOps guru with over a decade of hands-on experience architecting systems for AWS. After leading major cloud initiatives at CBS Interactive as the Director of Cloud Architecture, he founded Cloud Posse, a DevOps Accelerator that helps high-growth Startups and Fortune 500 Companies own their infrastructure in record time by building it together with customers and showing them the ropes.