Public “Office Hours” (2019-12-18)

Erik Osterman | Office Hours

Here's the recording from our DevOps “Office Hours” session on 2019-12-18.

We hold public “Office Hours” every Wednesday at 11:30am PST to answer questions on all things DevOps/Terraform/Kubernetes/CICD related.

These “lunch & learn” style sessions are totally free and really just an opportunity to talk shop, ask questions and get answers.

Register here: cloudposse.com/office-hours

Basically, these sessions are an opportunity to get a free weekly consultation with Cloud Posse where you can literally “ask me anything” (AMA). Since we're all engineers, this also helps us better understand the challenges our users have so we can better focus on solving the real problems you have and address the problems/gaps in our tools.

Machine Generated Transcript

That's me.

That's right.

You know these days there was pride is very private.

Conservatives want to really make it clear you know what's going on.

All right, everyone.

Let's get this show started.

Welcome to Office hours today.

It's December 18th, 2019, and my name is Erik Osterman.

I'll be leading the conversation.

I'm the CEO and founder of Cloud Posse. We're a DevOps accelerator: we help companies own their infrastructure in record time by building it for you and showing you the ropes.

For those of you new to the call the format of this call is very informal.

My goal is to get your questions answered.

Feel free to unmute yourself at any time if you want to jump in and participate.

We host these calls every week. We'll automatically post a video recording of this session to the office hours channel as well as follow up with an email.

So you can share it with your team.

I've been slacking on those emails lately though.

So ping me if you don't get the message. If you want to share something private, just ask.

And we can temporarily suspend the recording.

With that said, let's kick this off.

I have prepared some talking points as usual.

These are things that have come up throughout the week.

We don't have to cover these things.

These are just if there are not enough questions to get answered today.

So, first thing: Martin on our team (he's somewhere in Germany) shared a pretty cool little module he's been working on, which shows how you can deploy a bastion in AWS that has zero exposed ports to the outside world.

And you do this using the new features of AWS Systems Manager Session Manager for shell access.
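
(As a quick illustration — assuming the instance runs the SSM agent and has an instance profile with the AmazonSSMManagedInstanceCore policy attached — a session looks like this; the instance ID is a placeholder:)

```bash
# Open a shell with no inbound ports and no public IP on the instance:
aws ssm start-session --target i-0123456789abcdef0

# SSH over Session Manager also works by adding a ProxyCommand to ~/.ssh/config:
#   Host i-* mi-*
#     ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```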

Also, in other news, the Nginx headquarters in Moscow was raided — kind of interesting drama in our industry to unfold 15 years after the company was started.

Istio plans to deprecate Helm-based installations.

I'm really not liking that.

I hope this is not a trend.

This makes me think of the Java installer we've had to put up with for a couple of decades here.

And lastly, I want to cover something that we briefly touched on last week with the announcement that, you know, the official Helm chart repositories are going away.

What is an alternative pattern that you can use?

That doesn't mean you have to host your own chart registry.

And I want to talk about that too.

But before we cover this in more detail.

Anybody have any questions we can get answered on Terraform, Kubernetes, DevOps, or general job search?

Anything interesting or an idea.

This is Project Adam cruise towards the end.

I'd like to talk a little bit about the pull request process and how to get bug fixes and changes contributed to some of the Cloud Posse modules.

OK Yeah, it sure will.

We can jump in and talk about that right away.

First thing.

Have you already opened something

and you're frustrated that we aren't moving fast enough?

That happens too sometimes, because we get a lot of incoming pull requests.

Yeah, I have.

I have two open right now.

And I'm just kind of curious what the expectations are —

how long it should take to get reviewed, or, you know, how long

I should be patient.

Don't feel bashful pinging us in Slack to prioritize.

It's kind of like — we sometimes follow

the path of least resistance.

So if you nag us on the Slack team,

I'll have Andriy or myself or one of the guys look over your request and we'll try and expedite it.

So for starters, can you post your PRs to the office hours channel and I'll call attention to them.

Absolutely. You can review that one — it's a one-page deal.

If anybody afterwards wants to help me troubleshoot: I'm doing a GitLab install — GitLab on a Kubernetes cluster — and cert-manager is giving me a pain in the ass.

So I can send you guys some cookies if you want to talk about it later anyway.

OK — I read an article yesterday regarding Alphabet's move toward trying to be number one or number two in the cloud space.

Yeah, with their timeline —

it seems to suggest that they may end up just pulling out of it altogether.

I don't think it was good PR for Google.

Not the cloud product itself.

Yeah, it's great.

They want to be number one or number two.

But Google has a horrible track record of it.

And thank you for bringing this up.

This should totally be a talking point.

But yeah Google has a horrible track record as it comes to pulling out of products.

We were using Google Hire, for example, and less than a year after we signed up and got everything in there,

They pulled the plug on that like so many other things.

So I would not put it past them to just say, hey, we're exiting the cloud business.

Sorry you didn't make us number one or two.

We only want to be number one or two.

That's your fault. You should have made us number one or two.

Yeah — like they'd just pull out of GCP then.

I don't know.

But I mean so.

So, a very distant second.

In the article I read, I think it said their cloud business maybe generates somewhere in the $8 billion per year range,

when Amazon does that per quarter, something like that.

I mean, yeah, there's a big gap.

But it's still.

I mean, does that make it not even worth it for them.

That's amazing.

Yeah, because I see where Google does some things very well, and AWS does some things very well as well.

But it's still a part of the market.

And they can actually grow it if they can do other things even better than Amazon.

I think what Amazon has been doing is really like the Facebook model: they look at trends in what people are doing,

and then build a service around it,

and then they push that out — and repeat each time.

Yeah, I would love to see the Amazon formula.

I bet they have something pretty well-defined internally about how they identify, assess, scope, validate, build requirements, and prototype.

I mean, it's a machine inside of Amazon how they're pumping out these services.

Yeah, to your point — I mean, with a lot of certain Google services

I'm envious: the UX is a lot better,

the performance is a lot faster.

I mean, GKE compared to EKS —

night and day difference in terms of usability and power and updates.

Yeah, I must say, even after filing a couple of GCP bugs, they were super quick to fix them — like, I was using Google Cloud Functions on a product.

They fixed the bug in, I don't know, two weeks or something, and it wasn't really even an important one.

And it was not even a big company — I was working on a school project — and they fixed it in two weeks.

I just feel like the relationship is a lot better between Google and its customers, even though a lot of people say Google sucks at customer engagement.

I think they've I think they've turned a corner on that.

I mean, I'm a G Suite user and a Google Fi customer —

Their mobile offering here in the US.

And it's the best experience of any product that I have.

Also, their Google Wifi —

I use that as well.

And that's been a phenomenal experience working with their customer support.

I talked to somebody within about five minutes of calling up their technical support.

They know what they're talking about. Just not having to navigate 25 menu options and sit on hold for 45, 50 minutes —

That's what's great.

Yeah, I had that experience just recently with a guy at Codefresh.

A good experience, it sounds like.

Yeah — you know how it gets set up:

you get shown a demo, and sometimes it's somebody who's just been trained how to do the demo, and you try to deviate at all —

And they're like, I don't know.

I don't know what I'm doing.

But this guy was like, you know, all kinds of stuff you know.

You know and showed me.

I was asking all kinds of questions.

And he was going around and showing me different things.

And we were totally off script and I really appreciated that.

You know, this week we had a technical guy in front of us — that's very cool.

Do you know who it was, by any chance?

I'd have to come back to that.

Oh, yeah.

I'm glad those are a positive.

This is place I can.

If anybody is interested.

I can speak to how things happen at Google internally,

Having worked there for four years.

They're extremely good at doing stuff at scale.

In other words, things that they do at scale,

they take very seriously.

So if you need them to do something that might be somehow personal or unique to you, you're screwed.

Just never going to happen.

Yeah, but if you need.

If you need something that is, in a sense part of their existing workflow you know they stay up late at night trying to make that happen as quickly as possible because they want to reduce the friction.

Yeah Yeah.

So basically, if they see that this affects 10 million people, then they will go do it.

Yes. And I'll talk a little bit about how they can get away with that.

The other thing is that they have two different —

there's basically, like, two different categories of services in terms of support — in terms of internal support, SRE support.

There's the stuff that's funded, or I should say the stuff that has what they call a revenue stream.

And so there's a billing ID for it.

And those things then get different tiers of support internally, depending on how much they bring in.

And then there's everything else.

So basically, if in their internal accounting they don't have a revenue stream associated with a service —

It could disappear at any minute and they wouldn't give a rat's ass.

Nobody would cry.

That's harsh.

It's very harsh.

Yeah, it's totally.

Yeah. And the other thing is, the way they can get away with it is they do a lot of blue-green deployments.

And that's basically something that I would love to bring up in this group at some point — maybe not today — how we can go about establishing quality gates and running canary builds.

So at any given time there will be multiple versions of Gmail running at once,

for example, and the versions might be unbelievably similar to each other, but they then start to collect telemetry.

You know, they compare the metrics of the new whatever.

Well, this is what feature flagging is for, right? Like LaunchDarkly or Flagger or Flipt — except, well, yeah — except it's more about having multiple —

in other words, feature flagging,

yes — except you're comparing the flagged instances, the flagged pods, with the non-flagged pods, and they're all running simultaneously.

Right — and the beauty with things like Flagger is that the telemetry is built in.

So you can see how that's performing for you.

And then you can tie that in with Prometheus and all your other reporting dashboards.
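
(As a rough illustration — not the speakers' config — a minimal Flagger canary that gates a rollout on its built-in Prometheus metrics looks something like this; the app name, port, and thresholds are placeholders, and field names have shifted a bit across Flagger versions:)

```yaml
# Flagger shifts traffic to the new version in steps and rolls back
# automatically if the Prometheus-backed metrics fall below thresholds.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo                 # placeholder app name
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m                # how often the checks run
    threshold: 5                # failed checks before rollback
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99               # % of requests that must succeed
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500              # p99 latency in ms
        interval: 1m
```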

So I guess what you're monitoring or measuring is going to depend on the product, but yeah — the tooling for that is there.

I guess there's no good demos for how you should do it.

Yeah. And then I think the ultimate secret —

I mean, it's not really a secret —

it's just that they have the discipline of making sure that whenever they design something, they include the diagnostic criteria up front.

So if you're writing a piece of software.

And it's going to do something, then something in the world should change as a result of that piece of software.

That's a good point.

In the design, you have to say: this is going to affect these metrics.

And these are the thresholds that are considered good.

And then, so, like, zero might be bad.

For example, you want to tell people in the spec that zero is bad, because then that automatically gets plugged into everything that flags what could be bad.

Yeah, I totally agree in theory with that.

I think few teams have the rigor to be able to actually carry that out.

And the foundation to truly report on that observability you're describing. I like the idea that before you make a decision, you decide up front kind of what your expectations are for what this is going to do.

And then, by deploying it and seeing that change happen —

did it meet or exceed your expectations, or not at all? Something in kind of a similar vein that my team has been going through: we started pushing back, because we get people that say, hey, can you create this little app for us, you know, as an internal research-and-development type thing.

But you know we don't want you to spend a lot of time and money on it.

So don't do a pipeline don't do any tests just do it.

And so what we started saying was: no — if you don't want a pipeline, if you don't want any tests,

it's not important enough to you.

We're not going to do it.

So we have this new philosophy that, you know, everything gets a pipeline.

Yeah — nothing doesn't get a pipeline.

Pipeline driven development.

Yeah. And so there's this video that I watched from Uncle Bob.

Everybody know Uncle Bob Martin?

He's one of the founders of agile — an agile veteran.

He did a video and he started talking about how you can't go to your business and ask, can I write tests.

Can I build a pipeline.

Because what you're doing there is you're trying to shift the risk to them.

Yeah, they don't know that stuff you know, that's you.

You know that stuff.

It's your professional you know risk to take.

So he said they're just always going to say no, you know.

So you have to shift that mindset.

And you take on the risk of, you know, saying: I will write tests, you know, as part of my professional ethics, or whatever — this is the way I operate.

Yeah, that was really powerful.

That is powerful.

Thank you for sharing that.

I like that you have a link to the video shot and please share that.

Yeah, I will share it.

And just to add some context —

and some reaction to what you're saying, because we all know, and I've been there:

if you didn't do that and you go along with it anyway,

Next thing you know, you get a notification.

So we're going live with the service next week.

And now it's your responsibility to manage the uptime yet none of the legwork was done to make sure that it was a reliable service from the get go.

So you're on call.

So, anybody have any —

has anybody, first of all, played around with the Systems Manager shell? We haven't used it.

We mostly use Teleport.

So I'm curious, if anybody's using the Systems Manager shell, what their experience has been.

I have not.

But I do want to hear a little bit more about Teleport — from when you've used it, how well it works, and how you manage the authentication.

Gotcha Yeah.

We can certainly talk about that a little bit.

I have, somewhere in my archives — if you go to the Cloud Posse docs —

so here, I think, under screenshots here,

I have some screenshots of what the architecture looks like

for Teleport, which is helpful, because it is a legit, enterprise-grade application in terms of complexity and architecture —

not because they're trying to be difficult, but because it's solving a difficult problem.

So we just gotta let this load here for a second.

It's going to be a little slower since I'm logged in.

And this is not cached session.

In the meantime, for everyone else: Teleport is a product by a company called Gravitational.

They actually just raised like $30 million, so they're doing pretty well.

Go to github.com/gravitational/teleport.

It's an open source — or open core — project.

Most of the functionality is in the open source version.

But things like single sign-on with Okta, or anything other than GitHub, require a license.

So let's see this loaded is screenshots.

So teleport.

So yeah.

So just like the cool thing with teleport is you can replay sessions like with full on YouTube style replay so that's what you see right here.

This is a replay of the session.

But the architecture.

Here we go.

See — is this big enough? Can you guys see this?

Maybe we need to make it bigger.

I think you were good before.

All right.

So there are these three components.

Basically the auth service, the proxy service,

and then the node service.

So the node service is this service over here.

And what you do is you deploy it on all of your servers — you deploy this node service, and that kind of replaces SSH.

So technically it can work together with OpenSSH, or it can replace OpenSSH.

Then you have the proxy service.

So the proxy service here is deployed as you can see multiple times.

However, it works like a tiered architecture.

So basically, the proxy service is your modern-day bastion in Teleport — you expose one of your proxies like this, and you put that behind an ELB with TLS and your ACM certs, on a domain that I own in this case.

And that connects you.

So when you connect publicly — either using the command line tool, a CLI tool called tsh, or over the web like this — you're actually connecting to the proxy service here.
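
(For example, the typical tsh flow looks like this — the proxy address and node name are placeholders:)

```bash
# Authenticate against the proxy; it hands back a short-lived certificate.
tsh login --proxy=teleport.example.com

tsh ls                     # list nodes the cluster knows about
tsh ssh ubuntu@my-node     # SSH to a node through the proxy
```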

Then there's that.

So then you have say your clusters.

And you have multiple clusters.

And this would be an example of one cluster here.

But let's say you have five or 10 or whatever clusters: all of these deploy the same stack of services — they have an auth service, they have the proxy service,

and then they have the node service running on all the nodes. You typically run like one or two proxy services for HA, one or two auth services for HA in each cluster, and then a node service on each node.

So what's going on here is that when these services come online the proxy service comes online and tunnels out of your cluster to your centralized proxy service.

This ensures that all of your clusters don't need to be publicly accessible because they tunnel out through the firewall.

Then you have your auth service.

So it uses PKI infrastructure, where each auth service has its own certificate that's been signed by the central CA, and then that establishes the trust.

So this auth service here can handle issuing the short-lived certificates for SSH clients.

So let's see here.

I know.

So I know enough to be dangerous.

I'm not deeply technical on Teleport, because it is a sophisticated product.

Other things to point out here: each service here writes its events to a DynamoDB table.

So you can have an event stream of what's going on there.

So you can have an event stream of what's going on there.

And then you can tie that in with Lambdas — like Lambdas that send Slack notifications — and then all the sessions —

these are your TTY sessions —

those get shipped out to an encrypted S3 bucket.

So a lot of companies have tried SSH logging, and they will send those logs to a service like Sumo Logic — not to single anyone out here.

And that's cool from like one perspective discoverability having it all in one place.

But it's not really effective if people are using TTY sessions — like going into vim, or using top, or running anything that uses ANSI codes to reset the cursor position and all that stuff — because you're not going to see what's going on when you look in those regular log files.

That's why you want YouTube-style playback, and that's why they send a binary log of the session —

what's happening — to this encrypted S3 bucket.

So deploying this.

So there's the way we've done it, and the official way to deploy Teleport, and there are pros and cons with both strategies.

So let me talk about the official way of deploying teleport.

So the official way of deploying teleport is that you should not deploy it inside of Kubernetes.

It should be deployed at a layer below your container management platform.

So basically on the host.

And that means deploying it

so it runs under systemd.

And the reason for that is that this is how you triage issues that could be affecting even Kubernetes itself.

Deploying it a layer underneath Kubernetes gives you the escape hatch you need to triage things going wrong with your cluster, for example.

For example.

The downside is that it introduces more traditional config management strategies — or a dependency on that — or you need to roll your own images with Packer or something, that have those binaries built in.

We've kind of taken a compromise on this, where at Cloud Posse we deploy this as a Helm chart, and that allows us to use the same pipelines and tooling that we have for everything else we do.

And we deploy the auth service that way, the proxy service that way —

using Helm charts.

And that also means that Kubernetes can manage the uptime and monitoring of those services and means we can leverage Prometheus with all the built in monitoring we already have in Kubernetes without adding anything custom.

So there's a lot of other benefits to deploying this.

Also, if you go to Cloud Posse's GitHub —

github.com/cloudposse/helmfiles — you will find our helmfile for deploying Teleport under Kubernetes.
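
(For reference, a simplified helmfile release along these lines — the chart path and values below are illustrative, not the exact contents of the cloudposse/helmfiles repo:)

```yaml
releases:
  - name: teleport-auth
    namespace: teleport
    chart: ./charts/teleport-enterprise-auth     # hypothetical local chart path
    values:
      - auth:
          clusterName: example-cluster
          storage:
            dynamoTable: teleport-state          # created by the Terraform module below
            auditSessionsUri: s3://example-teleport-sessions
```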

But keep in mind that Teleport, like so many other things, doesn't just run self-contained under Kubernetes; it has backing services that it needs.

And that's why, if you go to the Cloud Posse GitHub and search for the keyword "teleport" under our repos, you'll find the Terraform module that has the backing services for Teleport.

So Yeah.

This is the module that you might be interested in if you're going to deploy this.

So this deploys the S3 bucket following best practices, and the DynamoDB table following best practices.

And that makes everything else capable of being deployed basically inside of Kubernetes.

Let's see, what else — our Docker image.

Don't use this Docker image.

It's not been updated.

As you can see — let me share this too.

Were there any other specific things you wanted to see.

So that was just a high-level overview.

I can go deeper or more technical if there's something in particular, you were hoping to see on that.

So you went with the enterprise version from the start?

Yes Yes.

Yes Yes.

OK Cool.

So, so yeah — we deployed with the enterprise offering. The enterprise offering enables G Suite integration, for example, or Okta integration, and then you map — then you use —

Let's see here.

How easy this is to show.

If we go to.

So we have our own charts for Teleport that we manage.

These are different from the official charts.

There is more than one here: teleport-enterprise-auth, teleport-enterprise-proxy.

So in here, you can see that we provide some examples.

So I'm making the assumption that once I provide access to a user,

I can also restrict them — restrict them to specific resources.

Yeah. So you basically map the role that is passed through the SAML assertion.

So when they authenticate, it passes the role,

and then you can map that role to a role inside of Teleport.

You can configure all the roles you want.

So then in Teleport you can configure it like that:

this role allows root access to this cluster, or this role allows root access to all clusters.

Now, you couldn't make it more fine-grained, like read-only — read-only as a concept only exists inside of the operating system itself,

and SSH doesn't govern that.

So you can control which

SSH account they can log in as.

I'm trying to remember — see, I have not been doing the deployment of Teleport.

Jeremy on our team

is the expert on it.

So I'm going to have to plead ignorance here on how that role mapping is set up.

I believe it's deployed through a config map somewhere.

Here are the helmfiles — would that be helpful?

All right.

Have you given more thought to how you plan to deploy Teleport?

Not yet.

I'm just tire-kicking.

OK, here we go.

So we're sort of deploying it with helmfile.

And then we define these templates here, which are the configuration formats of the chart — similar to the Teleport descriptor.

Guess we don't have any examples of those in the public repo here.

Yeah Sorry.

No, we don't have a well documented explanation of how that works right now.

But yeah, I can ping Jeremy for some more information if it comes up.

Cool any other questions.

Sorry, Dale — I can't answer that question precisely for you.

Any other questions?

I mean, that was it regarding that.

Yeah or anyone else.

Any questions.

See you.

What were some other talking points.

Anyone have any commentary on the whole Nginx raid

that happened in Moscow?

They sent out an email.

I heard the thing kind of worked out OK.

Yeah, good.

Yeah, there was an email that Nginx sent out to all their customers or something.

OK Yeah.

OK That's cool.

Yeah, I got to look that up, I'm sure they were like full disclosure.

This is what happened.

You know.

But nobody was arrested.

Blah blah blah.

OK So nobody was arrested.

I'm just thinking — I recall very well that in the original reporting it said they were brought in, like, the founders of Nginx were brought in.

That's what it said in the original.

But, you know, maybe that was just hearsay or secondhand and not confirmed.

I found a tweet about it. Given how much of the internet is underpinned by this, I was worried about it.

I was worried about it.

I'll post the tweet that I found — it has a screenshot of the email.

OK: "We always strive to be transparent with you, our valued customers, and we're committed to keeping you updated."

OK — law enforcement.

All right.

Yes — they were just brought in for questioning, the founders.

They were not technically arrested.

They were brought in for intimidation.

All right.

Could something like this happen —

yes — what was that?

Could something like this happen in the US, like, because of copyright infringement?

Would people get arrested?

I mean, if they want to they could do whatever the hell they want.

You know what.

What did happen in the US.

No probably not.

But the Patriot Act and stuff.

I mean, they could lock you away.

OK The good thing.

Again And yeah.

And if it's a Hollywood movie they might bust down your door with a SWAT team because you might be a violent offender or something.

I don't know.

I mean, this whole thing reminded me of the Pirate Bay raid in 2008.

I think it was when they raided the data center.

Was it later?

I think 2007, 2008.

So yeah, I mean, it could happen here, I think.

Like,

people could get arrested for it.

Remember Kim Dotcom and the raid on his mansion in New Zealand?

That was pretty insane.

They sent in a paramilitary team or whatever — and they did the same thing with his mom as well.

They actually did.

Oh, they did that to Kim Dotcom's mom?

Yeah — they made it personal.

Well, so — I don't know if any of you are using Istio. We've started using it with a customer.

That project was delayed because we kept on waiting for the Helm chart to become more stable.

And we also saw that they had started a new Istio installer project repo for Helm.

But then Ryan in our community called this thing out to my attention, which kind of bummed me out.

I don't know if it's true or not.

It said that they're going to be deprecating the Helm installation approach.

So I get it.

As a developer kind of why they're doing it you know they're frustrated with some of those limitations.

But I don't like it in terms of the trend this represents.

This makes me feel like it's the modern-day Java installer, where you can't just apt-get install the regular Java — you have to go sign up,

give your email, download a JAR, curl it or pipe it into bash, agree to some terms of service,

all this other stuff.

Is this the future?

I hope not.

Well, that's not even the case anymore.

I mean, I could just go say, you know, brew install openjdk.

Yeah, well —

OpenJDK.

Right — but that's not the Sun version.

Well, so I'm hoping that there'll be an about-face on this decision, because I do like installing everything in a consistent manner using Helm.

I don't have a huge problem with it.

If it's providing other — like, if doing it with Helm required more than just a helm install — if it required more than that, then it makes sense to wrap it with something, so that it can just be, you know, an istioctl install.

istioctl.

Right — I mean, you want to make it as easy as possible for your users.

Yes, yes, and yes, you do want to make it as easy as possible for your users.

But sometimes these are at odds with each other.

We had this conversation last week about Helm, and I think you actually brought up that you are disappointed that a lot of the Helm charts are insecure by design, or insecure out of the box.

And I think their goal there is to make them as easy to install as possible.

That's at odds with making them as operationalized, as productionized, as possible.

Oh, yeah.

I mean, the people behind Istio in particular tend to buck the norms anyway. Another perfect example: you will not find an Istio channel in the Kubernetes Slack.

They have their own.

So that's just the way they operate.

At the same time, Fluentd and Linkerd and all the others — I mean, this is an Istio-specific issue.

Yeah, I want to hear from them.

For me, the only issue that I see is, like, how do you make this replicated across multiple clusters?

Like, for example, you have a dev cluster and you want to install Istio on it.

I mean, calling an istioctl command works.
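
(For context, around Istio 1.4 the non-Helm install path looked roughly like this; the profile name is just one of the built-in examples:)

```bash
# Apply the built-in "default" profile directly to the current cluster:
istioctl manifest apply --set profile=default

# Or render the manifests so they can at least be versioned in git:
istioctl manifest generate --set profile=default > istio-generated.yaml
```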

Is it the way to go for now?

Exactly — I mean, it's basically like using the AWS CLI, right?

And we all know —

but we're not using the AWS CLI to manage and define our infrastructure, right?

That's why we have tools like Terraform.

That's why we have tools like Helm.

So how do we — I guess I'll be OK with it,

but then they need to provide a declarative way for me to describe the state of my Istio — if they can make it, you know, like istioctl apply and istioctl destroy — then, yeah, I guess I'll be OK with it.

That said, I just don't feel right.

It doesn't feel right.

Yeah something doesn't feel right.

And so, you know, we've been a big proponent of helmfile — helmfile is one declarative way to do that.

But that's why I'm so happy today that there's a helmfile provider

now that's helping us integrate helmfile

into our Terraform workflow.

So does that mean we're going to need an Istio provider for Terraform, maybe, or something like that?

I mean, I would like to know what the reasoning was.

Why did they do it?

I mean, is it something where it's really like: I cannot solve this with Helm, and the Helm side says

we don't want to solve this case?

I mean, the only thing that I could think of is CRDs — those are kind of an issue.

Yes — CRDs are kind of poorly supported

under Helm 2. If somebody can explain it better than me, go for it.

It's basically the race condition between the CRD getting deployed, the containers coming online, and waiting for those resources to be functional — that isn't handled by Helm.

So you get these errors like: this resource doesn't exist —

well, it's really just in the process of being created.

I think that's the gist of it.

I definitely encountered this problem.

The way we worked around it is

we just use hooks — exactly, you know, helmfile hooks — calling kubectl apply, right, on the manifest.
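
(A rough sketch of that workaround, using a helmfile presync hook to apply the CRDs before the chart syncs — the chart and CRD manifest path are placeholders:)

```yaml
releases:
  - name: cert-manager
    namespace: kube-system
    chart: jetstack/cert-manager
    hooks:
      - events: ["presync"]          # runs before the release is synced
        showlogs: true
        command: "kubectl"
        args: ["apply", "--validate=false", "-f", "./crds/cert-manager-crds.yaml"]
```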

Yeah, there's just some issues when you try to have a resource of a specific type.

And that type doesn't exist yet or it's in the process of being registered.

There's something going on.

And it's annoying, especially with cert-manager.

Yeah, and everything still needs to have its CRDs

these days.

Yes. So I know this affects him as well,

and I know he's been contributing some PRs to fix some of these problems.

But I don't know if the root causes have been addressed.

I thought I saw something in a changelog a while ago — a few days ago.

But that.

Oh, really cool.

See if I can find it.

Yeah be sure to officers.

If you see the steel city on manifest apply for you're going to want.

Oh, nice.

OK So they are doing something along those lines.

Yeah, they'd be stupid not to.

Cool Yeah.

Well, one of the last things I wanted to cover briefly: I know many of you are using Helm and could be impacted by the shutdown, or the eventual deprecation, of the official charts repo.

There are a lot of charts out there.

And I don't necessarily want to have to figure out where their chart registry is, or make sure that it's up to date.

Very often you'll find the chart registry.

You'll find the chart.

But they haven't pushed the latest package up and you want to use it.

So we've been there.

There are a few of these plugins out there that add git protocol support to Helm.

So you can just use, like, a git-based URL scheme.

This is the best one, though, that I've found — and I forget his non-GitHub name, but

he's in SweetOps as well,

and a helmfile user.

So this project, together with helmfile, works great —

you can literally point to charts anywhere on GitHub.

So let me see if I can find it — in this repository —

find an example using it. Yeah.

So here's an example of the chart repository: we're pointing straight to a GitHub repository and pinning it to a commit.
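
(One such plugin — I believe the one being referenced is helm-git — lets a helmfile reference a chart living in any git repo and pin it to a ref; the repo, path, and commit below are placeholders:)

```yaml
# Requires the plugin: helm plugin install https://github.com/aslafy-z/helm-git
repositories:
  - name: charts-fork                       # arbitrary local name
    url: git+https://github.com/example-org/example-charts@charts?ref=abc1234

releases:
  - name: my-app
    namespace: default
    chart: charts-fork/my-app               # chart directory inside the repo path above
```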

So this combined with helm file makes it pretty awesome because you can really deploy from anywhere.

You can also deploy from the local file system.

If you want to. Does it work with Helm 3?

Good question.

I haven't tested it yet.

We'll be doing more Helm 3 stuff in our next engagement.

I've been playing with Helm 3 and helmfile.

How is it working for you.

I am in the initial development phase.

So I haven't deployed anything, but it looks like it's working just fine.

I like it.

I've got it working — templating, linting, everything's going fine. Woo.

Same here.

Looks great.

OK, perfect.

And you are using helmfile as well?

Yep.

Yeah, so we're working on — we're going to come up with something repeatable, like what we talked about last week.

We're taking the Terraform root modules approach to Helm deployments as well.

So we've been asked to come up with a highly repeatable way to deploy a GitLab or Jenkins or SonarQube — you know, just spit them out all over the place.

And each time they get deployed, they need to be fully production ready, you know.

Yeah. And so that's like the root modules approach, where this is our root module, and the root module in this case is going to be a helmfile.

It's coming together.

Have any of you helmfile users kicked the tires on the helmfile provider?

Yeah —

we've been working with it a little bit.

It'll work just fine if you're not using Terraform Cloud.

But if you're using Terraform Cloud, there are some challenges.

The idea is to have everything, from the cluster to the apps on the cluster, all in Terraform.

So yeah, for at least everything up until like you're up through your shared services.

Not necessarily using it for application deploys, but using it for things like a GitLab deployment, for example — to use Andrew's example — or we could use it to deploy, you know, maybe cert-manager to the cluster. We define the cert-manager,

all that stuff,

in helmfile, because that's how we have it today,

and then use Terraform to call that — entirely

because of the dependency ordering.

So you can tie it altogether more easily.

But yes, it is layers on layers on layers.

And I think that could be a good

next step — it would easily be something that would build upon those root-module helmfiles that we're doing now.

It would just call the helmfile. And we did something like this:

So, I work for a government systems integrator, and the government has been, for the last year or so, putting out their RFPs with tech challenges — like hands-on style meetings where your team has to go into their office and they sit you all down.

And they hand you a piece of paper with 10 user stories or whatever on it,

and say, OK, you have six hours — go. And they hand you keys to an empty AWS account, and they expect a running app with full infrastructure at the end.

So you have to come in with your infrastructure as code and everything all ready to go.

So we built — pre-built — like what you were talking about, with Terraform and Helmsman.

And it worked pretty well.

It was not very customizable — it would just do its one thing, and it did it fine.

You know.

But like if you decided, well, I don't need Jenkins.

I'm going to use something else instead too bad you're getting Jenkins.

So our work that we're doing now is kind of expanding on that into coming up with ways to say, all right.

You want a Kubernetes cluster? Do you want it —

do you want it in AWS, do you want it in Azure, you know — and then, OK,

you've got your Kubernetes cluster; do what you want in your Kubernetes cluster. Do you want, you know, GitLab, do you want Jenkins, do you want cert-manager, blah blah, you know. And that's kind of where — I'll have to ask you a little bit more about that project later on.

And I have a question on that as well.

I think, more in general, with using RBAC with AWS and IAM — have you guys done that successfully, and with EKS? Yeah.

I mean, it's right now.

You know we haven't done anything super fancy with it.

It's, you know — when you stand up

EKS,

you can map users or roles to roles inside Kubernetes.

So I can say: anyone who has assumed this role in IAM, when they log in to Kubernetes, they are this role in Kubernetes.

So we have a role in IAM called Kubernetes master, and when they log into Kubernetes, they get assigned cluster-admin.

Right Yeah.

So it works fine.

Are you just managing everything on the IAM side, not with the aws-auth ConfigMap, or do you also have items in that ConfigMap as well?

Yeah, in the aws-auth ConfigMap you need to add what the role name is —

but not the users; just the role that you are assuming?

No, just the role.

You don't need.

I mean, you can map users,

but we don't think that's the way it's done, because you want to map the role.

Yeah. And then later on,

in IAM, you map, you know, who is allowed to assume that role.

So Yeah.

So you can put them,

for example, in a group that actually has that role attached to it, or something like that.
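
(A sketch of what that aws-auth mapping looks like — the account ID, role, and group names are placeholders:)

```yaml
# Map an IAM role (not individual users) to a Kubernetes group,
# then RBAC bindings on that group grant the actual permissions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111111111111:role/kubernetes-master
      username: kubernetes-master
      groups:
        - system:masters        # members of this role get cluster-admin
```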

Yeah OK.

Yeah but all right.

So do you then have to provide them the kubeconfig with that role assumption in it?

No — we give them the ability to generate the kubeconfig. So aws eks update-kubeconfig will generate the config with the role associated to that user.

It's a flag.

It's --role-arn.
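
(For example, with placeholder names:)

```bash
# Writes a kubeconfig entry whose exec credential assumes the given IAM role
# whenever kubectl talks to the cluster:
aws eks update-kubeconfig \
  --name my-cluster \
  --role-arn arn:aws:iam::111111111111:role/kubernetes-master
```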

Oh — OK, that sounds good.

Yes, please.

Yeah, that's where I got a bit hung up — with allowing them to just use the AWS CLI for that.

Yeah — we started out passing the kubeconfig around all over the place, and quickly realized that that was a horrible way to do it.

Yeah — has anyone been using the new Fargate on EKS?

I haven't.

But looks interesting.

I'm not sure what to expect.

But I think we have a Terraform module for it, which prototypes and tests it.

And you can look at that.

There's a lot of limitations to using it.

And I think it's a little bit early probably to use it.

I'd like — that's more the implementation

I like, that GCP has done with their own GKE.

It takes care of a lot of the management headaches, and I can just start

my applications with databases. Like, I'm actually building my own thing, and I like that the bigger pieces are taken care of —

otherwise you forget about some dependencies that just get overlooked, you know.

Oh, yeah.

'Cause, like, I'm sticking with RBAC,

for example — I can implement it very quickly with GKE without an issue; with EKS it always is a little bit more work.

Yeah, everything's just a little harder. Like, with GKE

there's just a checkbox to have it, right?

I mean, it's that kind of stuff. Can you share that link with me

here in the office hours? I'd appreciate it.

Cool right.

Yeah — here's our module, and here's the first pull request.

And Andriy links to, you know, some of the limitations here.

But just to emphasize what those limitations are.

I just copy and pasted it in here.

So you can kind of see what some of those are.

So it doesn't support network load balancers right now apparently.

And what are some other ones.

Yeah, you can't run DaemonSets — things that don't really make sense anymore,

I guess, since there's no traditional concept of a node in that sense.

So what if you decided you'd like Datadog —

a DaemonSet deployment?

All of those.

Well, yeah, exactly.

Yeah — if you need to use Datadog for your data,

it's not for you, right?

It's for the people who are running temporary workloads.

It looks like something you could just run jobs on, like —

Yeah Yeah.

No — like, for us:

so we run —

we run Jenkins in Kubernetes now.

And we run

ephemeral worker pods. I could easily see us using something like Fargate to run our ephemeral worker pods without having to worry about the underlying cluster, because they don't need any of that stuff.

They don't need a load balancer; they don't need, you know — they're just compute. It's CPU and RAM; that's all they need.

And it looks like

Fargate can provide that.

Cool. Well, we're coming up to the end here.

Are there any other questions to get answered before we wrap things up for today.

Well, I'll take that as a no.

All right, everyone.

Looks like we reached the end of the hour.

And that's going to wrap things up for today.

Thanks for sharing.

I learned a bunch of new things today as usual.

As always, we'll post a recording of this in the office hours channel.

Feel free to share that with your team.

See you guys.

Same time, same place next week.

All right.

See ya.

Public “Office Hours” (2019-12-11)

Erik Osterman | Office Hours

Here's the recording from our “Office Hours” session on 2019-12-11.

We hold public “Office Hours” every Wednesday at 11:30am PST to answer questions on all things DevOps/Terraform/Kubernetes/CICD related.

These “lunch & learn” style sessions are totally free and really just an opportunity to talk shop, ask questions and get answers.

Register here: cloudposse.com/office-hours

Basically, these sessions are an opportunity to get a free weekly consultation with Cloud Posse where you can literally “ask me anything” (AMA). Since we're all engineers, this also helps us better understand the challenges our users have so we can better focus on solving the real problems you have and address the problems/gaps in our tools.

Machine Generated Transcript

Let's get this show started.

Welcome to Office hours today is December 11th.

2019 and my mom's birthday.

Happy birthday mom.

My name is Erik Osterman and I'll be leading the conversation.

I'm the CEO and founder of cloud posse.

We are a DevOps accelerator.

We help startups own their infrastructure in record time by building it for you and then showing you the ropes.

For those of you new to the call the format of this call is very informal.

My goal is to get your questions answered.

Feel free to unmute yourself at any time if you want to jump in and participate.

We host these calls every week.

We automatically post a video recording of this session to the office hours channel on SweetOps, as well as follow up with an email.

So you can share it with your team.

If you want to share something in private just ask me and we can temporarily suspend the recording.

With that said, let's get the show started.

I do want to cover some things that we were just talking about before we kicked off the call here, which is on efficient ways to track your time as a contractor.

If you're doing that type of stuff.

But also here's some other talking points.

If we run out of things to cover — a public service announcement here; it was brought up again and reminded me this morning.

The official Helm chart repositories are slated for deprecation in about a year, and there's a schedule announced for that, which is on this link.

I want to show you our new Slack archive search functionality, which makes it a whole lot easier to find what you're looking for in the SweetOps chat we use.

I'll go over that live. I want to show our new modules for the new managed node groups for EKS, as a result of the announcements from re:Invent. And then one of the things Andrew also asked about was: what are the considerations that we take into account when choosing a CI/CD platform — and I expanded this into a list.

But let's just quickly jump to this thing about tracking time.

What was that cube called that you were talking about.

Andrew: it's called Timeular. So you get the app —

the app is free, but you can set up —

I think it's got eight sides, or six sides — no, it's got eight sides — so you can set up eight different tracking targets, whatever they are, whether they're different clients or whatever. No, I don't —

I didn't buy the physical device yet, but I have —

I'm using the app. Oh, that just lets you quickly tap on it.

So I set up — I've already set up three targets: one is the one project that I'm on, the other is the other project I run, and then the third one is overhead, which is, like, time that I spend that I can't charge to either project. And I can hit start/stop on any of them as I'm working through the day. Because, like, this morning I started work at, I don't know, 8:30 or something, and I spent an hour just plowing through emails, and I stopped myself and I'm like: how am I going to charge that last hour that I spent? Because it was, like, an email from this place and then an email from that place, and I didn't know.

And so I was like, all right I've heard about these devices and stuff.

So let's look into them.

And so this app is free — I'm using it now. I'm thinking about buying the actual device, because then you don't have to look down at your phone; you just pick it up and move it to another face, and it automatically syncs with your phone over Bluetooth.

The device itself is like $69 or something, but, you know, hey — it's Christmas. I wonder if there's some way to integrate that with either Zapier or Harvest directly, because we've been using Harvest to track time.

Currently they offer the following integrations: Toggl, Jira, Harvest,

iCal, Google Calendar, and Outlook Calendar. Cool.

And then — who was just talking about WakaTime? WakaTime has come up before. It's me, yeah.

So I use it to basically track hours across my projects.

It works great, but I don't have to do any serious billing —

it's just like, OK, I spent most of the time of day on this project, and that was it.

And so.

So WakaTime is kind of like self-inflicted spyware for consultants.

It keeps track of, like, what files you're editing, what things you're doing, so you can back that out to reconstruct your time as well as track your time at the same time.

Well for developers.

It looks pretty good.

Yeah it's as if lots of integrations I mean also into trade.

So if you're emails I think if you're all right I'm there.

But it's a cool program to just track like roughly how much time you spend.

Anyone else try walking time to know mark champ schemes and using that.

Cool. Well, let's shift the focus back to DevOps. First I want to see: anybody have any questions related to Terraform, Kubernetes, Helm, et cetera?

Well then, we'll jump right into your question, Andrew. You had asked me, just before we kicked this off, what some of the considerations were that we took into account when evaluating a CI/CD platform, and I just jotted down some notes before the call

so I wouldn't forget.

But before I talk about this, I just want to say that when we started, it was like four years ago, and yes, there was a rich ecosystem of CI/CD

then, and I think that ecosystem has 10x'd since we started.

It feels like everyone's pet project is to invent a CI/CD platform.

And there are more than I could possibly evaluate today.

So I think there are a lot of more exciting options available today than when we started looking at things.

And here are some considerations.

I think are basically requirements today.

So — everyone can see my screen here — one big thing, I think, is approval steps, since most companies still don't feel 100% confident with continuous deployment to production; for example, having a manual gate there,

ideally with some level of role-based access control over who can approve it, would be great.

Now, since we live and die in Slack, having that integrated with Slack to me feels like a natural requirement as well, because I just want to be pinged —

I just want to get a DM

when my approval is required, kind of like Pull Reminders does for pull requests. Shared secrets —

this is like a biggie for me, and one of my biggest complaints about GitHub Actions is that there's no concept of shared secrets right now.

So when you have, like, 300 projects like Cloud Posse, and you have to have integration secrets that are reused across all of them,

How do you manage that.

If you have to update each one.

So we end up having to write scripts to manage the scripts that manage the secrets.

So you're scripting, basically, updating your shared secrets across your GitHub repositories — it's nuts.

I would rather have an organizational thing where maybe teams under GitHub could have shared secrets.

I think would be a pretty cool way of doing it.

Easy parallelization.

I think you get this pretty much out of the box these days; when we started, Circle CI, for example, didn't have that.

Now they do with Circle 2.0. Other things: easy integration with Kubernetes.

I don't believe, for example, Circle CI has this. If you're a big-time user of Kubernetes,

I don't want to spend half my pipeline just setting up the connection to the Kubernetes cluster.

I like that just to be straightforward.

So this was one of the pros of why we selected Codefresh: Codefresh makes that really easy, because it has a bunch of turnkey integrations to the most common systems we all use.

What else — container-backed steps. Again,

when we started, this was not the standard.

Now this is the standard.

Every CI/CD pipeline basically uses steps backed by containers.

So I think that's fair.

Supporting web hook events from pull requests originating from untrusted forks.

This is a Biggie.

If you do a lot of open source how do you handle this securely.

And the reality is you can't — like, Travis CI will run the pipeline, but without any secrets in it.

But these days you can't do anything without secrets, so that's not a fair restriction.

So there needs to be a secure way to do that.

Typically, it's done these days using chat ops where authorized individuals can comment on the pull request and then that triggers some events.

And in the act of doing that, it will actually temporarily give that pipeline access to secrets that could technically be exfiltrated — but it's a necessary risk to take.

I just set one of those up in one of my open source projects, and it was a frickin' breeze.

Did you do that with Codefresh?

Right, yeah.

So Yeah it should be a breeze.

And technically that is a breeze in code fresh.

I will say that I've had a lot of issues.

But I think that's because our account has been grandfathered in through so many upgrades that are our something's wrong with our pipeline.

So we constantly have to reset our web hooks and stuff.

But yeah, it is technically very easy in Codefresh to do that.

And then you just look for a slash command or something in your trigger.

Yeah — that's a natural segue into this: ChatOps on pull requests.

So, like I said, if you automatically trigger everything you want to check on a pull request, you'll quickly fill up your pipelines — your integration tests are going to take forever.

So I think being able to do conditional-type stuff on labels and comments is a requirement for my system.

It should make it easy to discover all open PRs so it's easy to trigger them. This is why things like — well, Jenkins does this right out of the box —

or not out of the box, but with —

I forget what the plugin is for Jenkins that does that.

You're using that though right.

The GitHub plugin.

Yeah Yeah Yeah.

Codefresh had this, but they just deprecated that view.

So you can no longer see all open pull requests and re-trigger the pipelines for them.

I think this is a requirement, because pull requests have additional metadata, and you can't easily simulate that just by triggering the CI build on a branch, because you lack all that other metadata about the pull request.

So a way to trigger builds or jobs from pull requests — manually, in addition to automatically — is a requirement. It should also support remote debugging.

So this has been one thing — like for us, for example — that hasn't been a possibility until recently.

Now they've added support for that.

I hear really good things about it.

Basically, you want to be able to remote-exec into any step of your pipeline for triaging, because God knows it's tedious to debug things if the only way is to rerun the entire pipeline from beginning to end and not be able to jump in or look at what's going on inside of that container. Circle has had that for a very long time.

There have been tricks we've had to use, like tmate, to get around that in the past.

So making sure that that's possible.

Question for those of you using Jenkins: does Jenkins support that with any plugins today?

No — you may be able to reach into a container that you get into a step in a pipeline.

It's weird, I get it.

I'm sorry.

Oh no you go ahead sorry.

What I've usually done in the past.

So right now I've got Docker builds, and I'm using the multibranch pipeline from Jenkins.

What I usually do is, if I'm doing multi-container builds, I'll try and keep it in a format that I can easily recreate locally.

So I just try to recreate the same bug before moving into Jenkins.

And I haven't had many issues there.

There are a couple of nuances with the Jenkins plugins, but as long as you don't update Jenkins you're OK — which isn't always ideal, but it's kind of a sad reality, I guess, in open source.

You bring up — you jog my memory on another thing: it should support local execution of pipelines for debugging.

So this is pretty cool when you want to iterate quickly and be able to thrash things: just being able to run that pipeline locally. Codefresh has had support for this for some time now.

I don't know about circle or Travis.

GitHub

Actions you can run locally, or at least people have come up with workarounds for that.

Jenkins does that support local pipeline executions.

I would run a local instance of Jenkins to do it.

You laugh, but I've done that and it works fine.

I know — I get it, and it works,

Given our demographic of who we are and what we do.

But I wouldn't expect my developers to rattle off, like, a docker-compose up

for Jenkins, and then all the integrations for that. And then, I guess we could perhaps — but if you're not running local Kubernetes, it's not enough.

That's a good conversation to have.

I would love to debate that with anybody else who is and learn from it.

OK Yeah.

I've actually seen a GitHub Action to run a local Kubernetes cluster inside of a GitHub Action.

That's ridiculous — and I mean that in a flattering way.

I mean, that's something I spend a lot of time around.

Yeah — not to derail the conversation, but we're not doing it yet; we plan on using kind — k-i-n-d, which is Kubernetes in Docker — in order to test.

Right now we are developing reusable helmfiles, similar to how Cloud Posse has a bunch of reusable helmfiles.

Or like your monochart — a reusable chart — or reusable helmfiles.

And we need to test them.

So our first iteration is just going to be, you know, a smoke test: helmfile apply, and then wait, and then helmfile destroy. If those work successfully, then we're going to call the test

successful.
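
(Roughly, that kind of smoke test could look like the following — the cluster and file names are placeholders:)

```bash
# Spin up a throwaway Kubernetes-in-Docker cluster; newer kind releases
# set the kubectl context for you when the cluster comes up.
kind create cluster --name chart-smoke-test

helmfile -f helmfile.yaml apply      # deploy the release(s) under test
# ...optionally poll for readiness or run checks here...
helmfile -f helmfile.yaml destroy    # tear the release(s) back down

kind delete cluster --name chart-smoke-test
```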

So right now we're just going to deploy into our Kubernetes cluster, but we don't want to dirty the cluster with that stuff.

So we're going to use kind, which should be really cool for that. This is where things can spiral out of control, though: it's true for standard charts, but what if you had a chart that wanted to test external-dns, and then you need the IAM role for that.
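A minimal sketch of the smoke test being described — spin up a throwaway kind cluster, apply the helmfile, then tear everything down. The cluster name, file path, and sleep are placeholders:

```bash
#!/usr/bin/env bash
# Smoke test: if `helmfile apply` and `helmfile destroy` both succeed against a
# throwaway kind cluster, call the test passed.
set -euo pipefail

kind create cluster --name smoke-test
trap 'kind delete cluster --name smoke-test' EXIT

helmfile --file helmfile.yaml apply     # deploy the release(s) under test
sleep 60                                # crude wait; a real test would poll readiness
helmfile --file helmfile.yaml destroy   # make sure teardown works too
```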

You know there are limitations.

The charts we're working on right now are just services — we're working on a GitLab one and a Jenkins one.

The most common thing that we end up needing to use Terraform for, in combination with Helm, is for the backing services and chart dependencies, of which IAM roles are the most common.

Have any of you seen anything that auto-provisions IAM roles somehow as part of Kubernetes, like a CRD for IAM roles? You can use the Terraform operator for Kubernetes, I think, and Atlantis to do that.

But what about it.

The Terraform operator, exactly — the one by Rancher.

I guess there's a few now.

Yeah I'm I'm not doing that.

I don't think it would work for me because of the debugging implications of a CRD. I think it's beautiful in principle. What I want is a better way to see it visually.

Basically what I think I would want is kind of like a CircleCI/Travis/Codefresh-style UI for looking at the CRDs executing.

So there's like a window into that system specifically for those CRDs.

This all relates to Flux as well.

I haven't used Flux firsthand, but my understanding is it also suffers from some of this lack of visibility for debugging when things go wrong. Everything is great when it works, but when you're developing, things never work.

So how do you do that.

Just to quickly finish up the remaining things on this list of considerations for CI/CD platforms: making it easy to tag multiple versions of a Docker image.

It sounds obvious or trivial but I've seen systems which literally require you to rebuild that image for every tag you want to have.

So being able to add in tags to an image I think is valuable as well.

For example, we always like to tag the commit SHA of every image.

In addition to perhaps the release SHA.

Not the release SHA — the release tag.
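Adding extra tags to an already-built image is just metadata — roughly like this (registry and image names are placeholders):

```bash
# Build once, then attach as many tags as you like without rebuilding.
docker build -t registry.example.com/myapp:latest .

# Tag the same image with the commit SHA and a release tag, then push them.
docker tag registry.example.com/myapp:latest "registry.example.com/myapp:$(git rev-parse --short HEAD)"
docker tag registry.example.com/myapp:latest registry.example.com/myapp:v1.2.3
docker push --all-tags registry.example.com/myapp   # or push each tag individually on older Docker versions
```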

All right.

Moving on: it should support pipelines as code.

Basically the ability to declare your pipelines alongside your code itself and then ideally, the ability for whatever system you're using to auto discover those pipelines.

So GitHub Actions is a prime example of that, where you don't need to click around anywhere to get those actions working.

Let's see — it should support a library of pipelines or pipeline steps.

So this has also become very popular these days.

Codefresh has their steps library. GitHub Actions, I think, does this really well too.

What platform is it that has orbs — is that CircleCI? Yeah, CircleCI has orbs.

So that concept I think, is really important.

So related to this and there's few of you on the call using Jenkins.

I know there's.

I know this is possible in Jenkins as well, but my sense is I haven't heard anybody really talk it up — a lot of people dislike having this central library of declarative pipelines within Jenkins, and say it leads to management and software lifecycle challenges and instability.

Is that your feeling as well?

If so, can you elaborate on that?

Why does Jenkins fail where CircleCI and Codefresh-type CIs succeed? So I'll answer that more generally.

Tools that let you do whatever the hell you want.

Fail because people do whatever the hell they want.

I like tools that provide an opinionated way to do something because then it's like this is the correct way to do it.

So: do it this way.

People use Jenkins shared libraries in lots of different ways.

Some of them work better than others.

So there is a way: if an organization lays down the law on how to do it and sticks to that, it can work well. It just works.

So I actually just posted a link to some of our Jenkins Job DSL.

A lot of our Jenkins is open source, because we have others using our Jenkins, so please —

Look at it.

But we tried to create guidelines for how to create pipelines, so a lot of it is Jenkins Job DSL — lots of Groovy.

We're all software engineers, so you get a lot of "let's over-abstract our Job DSL," so we import classes in Groovy, which is super fun.

One of the big issues that we found — this is before my time here — is that it's just a whole lot of code.

And people deviate from those guidelines.

And as the standards change over time you end up with evolutions A through Z of it.

Yeah it's hard to keep everything all together.

And that prompted some of our assessments.

So I looked at Codefresh and I kind of ruled it out a little bit.

But we have an Argo CD and a Jenkins X POC halfway spun up, and I'm curious, because I push my team on being prescriptive.

That's a good thing.

Jenkins X is so prescriptive that you can do it only one way, versus Argo — underneath, with Argo you can do whatever the heck you want; it's an arbitrary code execution job platform.

But Argo CD is a little bit prescriptive and only does the CD portion of it.

So you guys — how do you pick the right amount of prescriptiveness?

The example I always give is Angular versus React. Angular has a "this is the way it should work" approach.

You know, it's much more prescriptive. React is more "do it however you want to do it" — you want to do it in just vanilla JavaScript, great.

You want to do it in TypeScript, great.

Do it.

That's the example I usually give.

Not to say that either Angular or React is better than the other.

I'm not saying that but that's the example you should be using.

We have a thread going back and forth on Argo CD and Jenkins X.

And they just announced they're deprecating it — well, not exactly that — they just announced that the community would be taking over maintenance of the project.

So whatever.

Yeah, and we have, of course, the library in Python that Alex is maintaining — a pile of Python scripts to manage pipelines — and inside of that we implemented a YAML parser in Python.

So we've had pipelines for a long time.

There's much better ways to do what we did.

But at the time that was a good option.

But I'm looking at Argo, how you do everything in a manifest, basically.

I'm liking that format, but I'm seeing limitations because Argo CD has a much narrower focus.

So for us to use it we're going to have to plug and play like six different components, whereas with Jenkins X we would have everything all in one.

We have nothing written in Angular and everything written in React, so I guess by that analogy we've made the decision that we prefer to have flexibility. But yeah, I'm trying to figure out the right pieces, because there are several considerations here that I don't have answers for with Argo CD.

And so it's a very timely discussion for us — this was just last week.

So I would love to follow up on this.

If you join us in a subsequent office hours, I'd love to hear what you find, because Argo has been in our evaluation bucket of technologies as well.

One of the things I find is that approval steps aren't an easy thing to implement in a lot of these tools.

OK Yeah.

Integration with Slack.

Pretty much everything does that these days but in varying degrees.

I would say it must integrate with Slack and give you control over the content of the messages.

Basically there are a lot of systems that will integrate with Slack, but they just spew generic messages that are kind of irrelevant for half the people.

So being able to target these messages to groups or teams and then also customize what shows up in the messages I think is something important.

And then it should support GitHub deployment notifications.

Now, we are a GitHub shop.

We use GitHub extensively, and GitHub has a special dialog now where you can see where a pull request has been deployed.

Tying into that somehow I think is a nice thing.

And then lastly, on pricing: if you're going for a SaaS solution, I really think the solution should be fair to smaller companies, not just enterprises — support a standard price-per-user model and unlimited builds.

Basically the product shouldn't limit how much you want to use it, and it should provide a reasonable escape hatch for overages without forcing you into an enterprise lock-in. So that's my list.

Any other considerations.

Did I overlook anything? Since we have a bunch of people here with a lot of CI/CD experience.

Well, yeah — one to ask about is authentication. That's one where Codefresh doesn't do so hot, because they gate it behind enterprise. The enterprise tax.

Yeah, that's a perfect plug for the SSO wall of shame website again.

What is it what was that.

It's sso.tax, I think. Whatever — do any of you know of a CI/CD tool that allows you to include text in the approval step?

Like, for example, a diff of the Helm chart or the Terraform plan or something.

I really would love to have that. — Sorry, can you say the question one more time? I just missed it.

I would love to have a tool that allows me to see, in the approval step, the Terraform plan — approvals that include that text.

Oh Yeah Yeah I get it.

Yeah that would be cool.

So like, it Slacks the Terraform plan to a channel and then there's an approve or reject kind of button below that — that would be really good.

Yeah for example.

I mean, Bitbucket Pipelines does this really well, because they basically collapse below each other and then you tap approve.

Basically you see the step from before that.

So basically you have some context.

On the CI side, CircleCI does it really badly, because if you want to approve you have to do it from a different view than the actual output, which I really, really dislike — because let's be honest, there are a lot of people that don't really look at their CircleCI and just approve because they have to. — So we have an implementation that we did by hand.

This was some time ago for a code fresh demo.

What's cool about this implementation is that it's just using custom steps, so you could implement this in any pipeline that lets you run containers.

And it does more or less what I think you're saying.

So this looks familiar if you're using Terraform Cloud or Atlantis.

It looks familiar but I think it's a little bit more polished.

And basically what we do is we're using the Cloud Posse GitHub commenter — the cloudposse/github-commenter tool — and then we have a number of templates in our Docker repos that you can borrow, and then we output this plan.

So I thought this was really clean here right.

This is — I feel like it's a perfect example of what you're asking for.

I would like this not only as a GitHub comment but also as a Slack message with the buttons at the bottom.

But notice also — we stripped out all that superfluous information using scenery.

Now, this was done, as you can see, back in April — before Terraform 0.12.

I don't know if scenery has updated support for Terraform 0.12 yet, but yeah.

So you get clean output, and then just clicking on this takes us to Codefresh and I can approve it or reject it right from there. OK.
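A rough sketch of the pattern being demoed — render the plan, strip the noise, and post it back to the pull request. The exact github-commenter invocation is left as a comment since its flags are documented in the cloudposse/github-commenter README; scenery here is just the plan prettifier mentioned above:

```bash
#!/usr/bin/env bash
# Pipeline step sketch: produce a readable Terraform plan for a PR comment.
set -euo pipefail

terraform plan -no-color -out=tfplan | tee plan_raw.txt

# Strip the noisy refresh/read output so only the actual changes remain.
scenery < plan_raw.txt > plan_clean.txt

# Final step (not shown): post plan_clean.txt to the pull request with
# cloudposse/github-commenter, then gate `terraform apply tfplan` on a
# manual approval step in the pipeline.
```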

Yeah that's clean.

I was thinking, how do I approve from there? Because people will just hit approve in whatever tool you are using and don't see the context.

So you have to kind of force them to actually click on these buttons.

Yeah, which I really value — the actual context of what you are approving is super important.

It is because we never check it out locally and try it.

And we need to.

And then obviously a little bit of reward at the end if your plan is successful or your apply is successful.

Right it's beautiful.

Thanks Yeah.

Any other considerations.

Anything I left out? I'll share this list afterwards so you guys can use it to form your requirements.

Do you have other platforms, like other OSes? So — this has not been a requirement for us, but it's a totally reasonable expectation for others.

It should support multiple OS platforms.

Well yeah, in my case the request is Windows and Android.

We also have — I don't know if this requirement is a good thing or a bad thing — but for CD, who can trigger a rollback from the GUI, for example.

So we have a whole bunch of RBAC rules for all of our pipelines.

I find it more hindrance than help.

But some people require RBAC.

Yeah, e.g. for approval steps — that's somewhat related, because approval steps mean nothing if you don't have RBAC or attribute-based access control.

Yeah, I was just trying to figure out approval steps for Argo CD, because in our case they don't exist.

I have not figured it out, but there is a diff in Argo where you can see a difference in the manifest files it would produce — I haven't actually used it for anything valuable yet.

But it's the only tool that I've seen natively show every Kubernetes object in a UI, where you click on it and see the difference between different places.

Well you should talk to our friend here.

Here he is.

He has something you might like.

And then there's also helm diff, right.

So are you guys using Helm charts for your releases?

Yeah — well, we have Helm, but it's broken, so I looked at helmfile.

We have some helm diff stuff and things locally.

However, Argo is what's templating and expanding out the files.

So that's only in the Argo POC, which is not in production right now.

We can't run the diff generation in production because there's no Tiller.

But I'm still sorting all that out.

So why did you mention helmfile in that?

Oh, because helmfile is basically an automation tool around Helm, and helmfile requires helm diff to function.

And what I like about it is, if you're using helmfile, you basically have a Terraform-like workflow for dealing with Helm charts.

So there is a plan phase and an apply phase with helmfile.
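In other words, the day-to-day loop looks roughly like this (assuming the helm-diff plugin is installed, since helmfile's diff relies on it; the file path is a placeholder):

```bash
# "Plan" phase: show what would change across all releases declared in the helmfile.
helmfile --file helmfile.yaml diff

# "Apply" phase: reconcile the cluster with what the helmfile declares.
helmfile --file helmfile.yaml apply
```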

But it doesn't.

So I'm currently using helmfiles to document what version of what chart is installed, for non-application stuff.

So for chartmuseum, cert-manager, those things.

But it's not working and I haven't bothered fixing it for some of that stuff.

So it's not very production ready there, but we are thinking about killing it and moving everything over to Argo, or whatever we do for CD, such that you don't have to manage things in two places.

Obviously that would lose some of the functionality you're mentioning — the diffs and the declarative stuff — but I'm trying to understand what the pros and cons of each would be.

Let me also show you one thing that I really love about using helmfile as part of our workflow.

So we always bring this up: the four layers of infrastructure. You have foundational infrastructure at the very bottom, the next one up is your platform, the next one is your shared services, and the next one is your applications and their backend services, and usually you treat each one of these independently.

So obviously helmfile works really well for your shared services — for deploying all the things your platform requires.

Things like external DNS cert manager, et cetera, et cetera.

But it's also really awesome for layer four, because — so here's our example app where we showcase a lot of the release engineering type workflows that we use.

If you go into the deploy folder here is an example.

And this shows now.

So this example app has these dependencies, and helmfile also supports remote dependencies.

So what's cool about that going back to my earlier requirement about supporting like a shared library.

What's cool is you can create a library of services that your applications use, and use version pinning to point to them.

What I don't like about my example is I'm using local references here, but just imagine that these can be remote. And then inside of the releases folder the developer can define how his application is deployed.

So it gives the developer a declarative way to deploy their applications, and it becomes even more declarative if you use something like monochart, which is our chart here. The monochart is basically a Helm chart as an interface that you can use to deploy

99 percent of apps that you're going to be deploying on Kubernetes without needing to write a custom Helm chart.

That's enabled because when you use helmfile, the values become your DSL — your declarative interface.

In this case, for your applications.

Since we're systems integrators, we want to be opinionated but we need to be general enough to support all our customers' use cases, which is why we support quite a lot of variations of configuration in our monochart.

But if you are building this for your own company, you don't need all these variations; you should standardize how you deploy your apps to Kubernetes and reduce the permutations that are possible there.

Going back to Andrew's comment about if you give everyone the option to do anything, they will do everything.

So you want to eliminate that and this is how you can do that.

So you just have a policy: hey, you use the monochart.

Any version you want — but that's cool, because now you can just look at the interface of the monochart version that you're using.

In this case version 0.12, et cetera.

So I have to go ahead.

What we want and what we have currently deployed are super different.

But if I focus on what we want and how we think we want it.

This is super useful, because — I haven't looked at your example, I'm thinking of one of our own services.

We still have a bunch of Ansible stuff.

We moved away from that Ansible stuff into Docker and Docker images, and set it up so that Travis builds the Docker image and pushes it into our Docker repos.

Those are all git-triggered.

And then at that point there's a new tag on the image, which would then need to go into this chart and get updated.

At first we put the chart in the application repos, and we're trying to use — well, the whole format of how you do chart inheritance is kind of changing around with some of the newer stuff.

So haven't figured that out perfectly.

But do you have an example like this that has GitOps set up, for when an application change happens and then everything else needs to follow suit?

So yes and no so we also have a.

So here's one nice thing about what we're doing here.

How do I set this up without taking too much time.

So one of the problems I have with a lot of these pipelines that are out there is basically they wake up when they see a new image and then they deploy that image somewhere but that is like one third of the problem.

The other two-thirds is: well, what's the architecture that this image gets deployed into?

And then what's the configuration and secrets that I need to run this.

And usually those three things are all tied together.

So deploying an image when the image changes without taking into account those other two things is worthless.

Yet that's how it is demonstrated in so many examples out there.

So maybe it works for some people.

I just.

If someone can show me I can't get my head around it.

So our strategy is quite interesting, and I call it the Helm cartridge approach.

So I want containers to work like Nintendo video game cartridges and just for the record I mean I stopped playing video games when it was the original Nintendo with Super Mario Brothers.

So don't ask me more questions about this, because I quickly won't know what I'm talking about.

But as really as two days I do.

So I want Docker containers that I plug into my cluster.

And it just works.

And that means it basically needs to also have its architecture deployed alongside of it.

So what we've been doing — and it's worked out well for the last few engagements that we've done — is we bundle it.

You see this.

This deploy directory is in my application my microservice.

So we're shipping all of this together at the same time.

So when we publish this container, what we can do is actually call helmfile on the helmfile inside the container, and that deploys the container along with its architecture and image at the same time.
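A very rough sketch of that "cartridge" idea — the image carries its own deploy/ folder, and deploying is just running helmfile from inside the published image. This assumes helmfile and the deploy directory were baked into the image at build time; the registry, tag, and paths are made up for illustration:

```bash
# The application image was built with its deploy/ directory (helmfile + values) baked in.
IMAGE="registry.example.com/myapp:abc1234"

# Run helmfile from inside the image itself, so the chart config that ships with this
# exact build is what gets deployed, along with the image tag it was built as.
docker run --rm \
  -v "$HOME/.kube:/root/.kube:ro" \
  -e IMAGE_TAG=abc1234 \
  "$IMAGE" \
  helmfile --file /deploy/helmfile.yaml apply
```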

I don't know — I feel like I derailed a little bit there instead of answering your question specifically, Adam.

So I asked a leading question, so that was an appropriate answer.

Yeah, I'm still wrapping my head around the right amount, because you're right — the demos that I've looked at only solve a portion of my problem.

Usually the image part of the problem. And figuring out how I should thread everything together when there are a million choices — I can take an approach away,

But I don't know if it's the right way.

So this conversation is helping me think about edge cases that I haven't already thought of.

Yeah so Yeah.

So then this basically answers that.

And so our pipeline for that.

What I don't like about our example app is I did something here that was me experimenting with a way of doing this, but we're not using this particular tool right now.

So I had this deploy tool as part of the image that implements, like, a blue/green strategy for deploying the container, or a rolling strategy for deploying this container.

My point was more that, as you see here, we're calling helmfile as a step.

And then if you look at the pipeline.

So we ship the pipeline here — the example pipelines — and if you go to the deploy step here you can see that we are just deploying with helmfile using that tool, but you don't need it.

Well, my point is you could actually just call helmfile directly.

And that's what we do.

All right.

Let me digress.

Let me pause on that.

Any other questions related to anything we've talked about, or something else?

I was going to mention that the way most Helm charts get implemented right now is sad, and that is: if you take all the defaults, what gets stood up is like an insecure dev setup.

Oh yes.

Yeah I want to.

I want to start a revolution to flip that you know.

Yeah you forget to set some setting.

You're going to fail up into the more secure space.

You know, so that if you don't set any parameters and you take all the defaults, what you're deploying is production ready.

I agree with what you're saying in principle.

It's just that operationalizing something is very different from getting a POC up, and the difference between success and failure is often early quick wins.

So psychologically speaking, practically speaking, I get why these charts try to be as turnkey as possible: look, in one click

I get this full-on Prometheus monitoring cluster with Grafana fully integrated to everything, and wow, that was easy.

But yeah, operationalizing it is now going to take a lot longer, because there are so many things and considerations

You gotta do.

The thing is that if we optimized for the latter, which is what we really need to go to production, most people would fail, because there's so much that has to happen to get it right and it's non-trivial.

And that's also where you're getting more opinionated.

So does that mean we, out of the box, require SSO when 99% of the stuff doesn't do SSO? And then if you're going to use SSO, how are you going to do SSO?

We use Keycloak, you use something else, somebody else uses whatever — the permutations explode to the point that you can't support them.

And that's what's so aspirational.

I agree practically.

I just don't see it working.

So the way.

The way we're coming at it for my team right now: we are coming up with helmfiles that act as — we're taking the approach that you guys have come up with, or maybe you haven't come up with it but you're using it, for your Terraform root modules.

We have helmfile root modules where they're literally —

This is the way I want you to deploy production GitLab.

It does require SSO, you know, and for the MVP we're going to support one SAML provider, because that's what we've been doing at first.

We're not saying it requires Keycloak.

We're just saying it requires SAML.

So whether that SAML is coming from Keycloak or coming from Okta or whatever, we don't care.

Yeah, and then what we're doing — and this is what I love about helmfile, and I'm learning more and more about helmfile, I haven't used it for that long yet — is we're using the environments functionality to say the default environment is production, and if you say --environment=dev then it doesn't worry about all that stuff.
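A minimal sketch of that usage, assuming a helmfile whose default environment is wired up as the production-grade one:

```bash
# No --environment flag: the default (production-grade) environment,
# with all the required settings enforced.
helmfile diff
helmfile apply

# Development: skip the production-only requirements.
helmfile --environment dev apply
```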

Exactly Yeah.

No environments are great for that.

We see people use it like that.

Let's see — I know, we're talking layer cakes here.

Layers and layers and layers.

OK that out of the way.

OK, what's cool about helmfile and environments is that environments let you, as a company, define the interface to deploy your Helm releases, which is in a lot of ways perhaps the final layer.

If you're using Helm.

This is basically: how can we as a company consistently deploy our apps using Helm? Because Helm charts have ridiculous variation in their values schema — basically no two Helm charts have the same interface unless they're developed by your own company.

So helmfile, by way of environments, lets you define a consistent interface to deploy all of your apps that developers can understand.

Yes there are a few downsides to this.

When you want to figure out how to implement a new toggle or whatever, you've got to go five layers deep to figure out where to stick it — that's our predicament right now.

Source leverage.

The goal is: if one of my people says "helmfile apply" for GitLab, it won't work because it's got required variables. But then they add in all the required variables that it tells them to add, because it'll tell them exactly what they're missing.

Once they've added all the required variables and it works, it is now the exact prescriptive way that I want them to deploy GitLab — which is like, oh, that's the holy grail right there. That is beautiful.

I do like that.

I do like that especially the emphasis you put there on the required variables and that it tells you what you're missing.

For that one thing.

Remember our talking points?

We didn't get through most of them, so I'll leave them for next week.

One thing I just wanted to point out as it relates to Helm:

the official Helm chart repository is being deprecated.

There was some interesting discussion brought up this week — you pointed out a bunch of it — which is that this is going to reduce the quality of a lot of charts out there, because of the lack of automated testing that most people won't implement.

People have different levels of rigor as it relates to maintaining their charts: producing the artifacts, uploading them, not clobbering previous releases.

Basically the ecosystem's hygiene — the way packages are signed and published up there.

There are so many things that can go wrong as a result of the official chart repository going away.

Now we all know why that official chart repository sucked.

It was impossible to get a change through because of the levels of bureaucracy and the checks and everything else like that.

So I don't know.

There's a balancing act here.

What I want to point out is one thing: I think a big part of the success of, for example, Terraform modules is that they don't require a dedicated repository to manage them.

The fact that you can basically point at any git repository and pull that stuff down and have immediate success I think is awesome.

I think it's a problem that Helm has taken this staunch approach that you've got to have a managed chart repository, which is basically just a glorified HTTP server.

But yeah, why do we need that? There's always public source control, and it's been proven to work with npm.

Why don't we just do something like that.

So the truth is you can, using Helm plugins. It's non-standard, but this is what we've been using now for over a year: the helm-git plugin here that I recommend.

Where is it installed.

So there is a dependency.

This is to install a chart just from a git repo.

Yeah exactly.

We have repo it's this one here.

I'll share it.

I really dig this plugin, because with this plugin and helmfile

you basically have your package.json for Kubernetes. You can also do the thing I just sent in — which doesn't work with versioning, but you can just specify the URL to the chart.

Which also works, yeah.
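Roughly, the plugin is used like this — install it, then reference charts straight out of a git repository. The git+https URL scheme below is my recollection of how helm-git addresses a path inside a repo at a ref, so double-check it against the plugin's README; the org, repo, and path are placeholders:

```bash
# Install the helm-git plugin.
helm plugin install https://github.com/aslafy-z/helm-git

# Add a "repo" that is really just a path inside a git repository at a given ref.
helm repo add my-charts "git+https://github.com/example-org/charts@charts?ref=master"

helm repo update
```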

Well, I suppose — yeah, you can get the tarball, but somebody has to create something that produces that artifact.

Yeah, and it works if you have a one-to-one correlation between GitHub repositories and charts — you use the tarball URL for the chart. And maybe that's actually the answer here: we should move away from mono repos to poly repos for Helm charts, and then we can just leverage that functionality.

But this is.

Yeah, this is putting up a tarball as a release artifact inside your releases.

No, no, you don't need that, because every GitHub repo has an automatic tarball artifact. — But does that work with the way that…

Because when you say helm package, it packages up a tarball in a specific format.

So you would have to maintain that format in the repository.

That is to say, in the repository — it doesn't do anything else, it just zips it up.

So right.

So long as you maintain that format — basically the root of your GitHub repo needs to be where the chart begins.

Yeah the same format as what it's expecting.

Yeah Yeah.

That could be cool.

I mean, it's worked really well for Terraform and Terraform modules.

So I think it's probably a good approach for Helm charts as well. We didn't do that with the Cloud Posse charts because we were just emulating what Google was doing originally, but now that's going away.

So I guess it's a brave new world.

Well, and Google loves mono repos too.

Well, you know, people try to mimic their mono repo approach and fail at it.

You know, there are certain circumstances where it makes sense, but it makes everything harder.

So let's see here.

So here's an example of a chart.

All you need to do is create a repo called cert-manager — or maybe not exactly that.

So what is it called — the Terraform convention is terraform-provider-name.

I think what we should have is probably something like a helm-cert-manager or chart-cert-manager naming, something like that.

I think the community should come out with a canonical repository naming structure and support that.

So, if you're familiar with the Terraform registry — and I want to refer back to the xkcd

I posted a few minutes back.

Well there's Maven right.

There's Maven and its artifact naming — there's prior art in that as well.

A lot of these will have that.

I'm just going to show how it works.

I was just going to share this: for the Terraform module registry, they're very particular about how you name your GitHub repository.

Basically terraform-provider-name.

I think we need that for Helm charts, so we get some sanity in the ecosystem.

Is there a similar thing for Helm Hub, and how do you get something published to Helm Hub?

Good question.

There is, yes.

Helm Hub is set up so you can publish your own repo.

So there's a process to add your repo in there, which kind of works, but the quality currently of these repos is

Super super super bad.

I mean, I get daily notifications of people publishing the same version with different digests — it's a mess.

And for me it's like, how can you use that chart and not be notified about it? And it's still the official Helm Hub, which I find super, super risky.

But nobody can have it.

Helm Hub does not track source code.

It tracks the package repo where you push up your tarballs.

Right Yeah.

I just realized, Adam, I forgot to finish that thought we were talking about earlier — you wanted to see the difference between two versions.

This is what you could integrate with yours — we've just got to compare.

Yeah, here — this is what you could integrate with your solution.

Why am I not getting a diff.

I'm not sure.

Try the other one.

Try a different release — maybe that one will produce a diff.

So you just pick two releases to compare and — there we go.

So you can compare any two releases of a Helm chart and see what's changing.

Yeah, this is super useful, because with the mono repos

there are no versions of individual Helm charts that you can really look at.

So there's a couple of discussions.

I wrote to the Helm Hub maintainers: there's no enforcement that the chart has to be open source, and you don't have to disclose the source repo, which I find super stupid, because otherwise you could just go look at it.

I wrote a recommendation that we implement hashes in Helm Hub, like with this one, yeah.

But let's see — it would be interesting if you could somehow partner with Helm Hub.

It seems like a perfect synthesis of what I'm raising with them, but the interest was kind of low.

Really I don't know.

Yeah Yeah.

I wrote a lot of messages to them, but the interest seems to be, well, kind of there but not really.

Let's see — in my opinion, there should at least be a central service where the digests of uploaded charts are recorded, and the hash should be checked against it.

So at least some kind of checking — it's great to have a package store to upload them to, but if the hash changes you should get an alert, and they don't do that.

I agree.

Something like that, yeah. There are lots of things to do, but yeah, let's see.

I put it there.

That's what I think should be done, but let's see how it goes.

Well all right.

Any last thoughts before we wrap things up today.

I'm starting to get interested in billing stuff, if anybody has an initiative going on around that — like AWS billing.

Yeah all right.

Yeah, we should probably dedicate more time to cost optimization techniques in AWS sometime soon.

We pay for CloudHealth, and they have a Kubernetes thing that splits it out, but we're not billing to different teams, because it's more of a "let's make sure we don't have a bunch of snapshots that need to get deleted and a bunch of instances that are not right-sized."

But I'd also be curious how to apply billing to Kubernetes.

It's also all of this stuff to do in get us to.

Yeah Yeah.

OK go.

I'll make a note of that and cover that on our upcoming office hours next week.

All right.

Looks like we've reached the end of the hour and that wraps things up for today.

Thanks again for sharing everything.

I learned a bunch of little cool tricks today.

I'm going to go check out the time tracker thing I always learn so much.

Thank you.

Again the recording will be posted immediately afterwards in the office hours channels.

This is available.

See you guys next week same time, same place all right.


Public “Office Hours” (2019-12-04)

Erik OstermanOffice Hours

Here's the recording from our “Office Hours” session on 2019-12-04.

We hold public “Office Hours” every Wednesday at 11:30am PST to answer questions on all things DevOps/Terraform/Kubernetes/CICD related.

These “lunch & learn” style sessions are totally free and really just an opportunity to talk shop, ask questions and get answers.

Register here: cloudposse.com/office-hours

Basically, these sessions are an opportunity to get a free weekly consultation with Cloud Posse where you can literally “ask me anything” (AMA). Since we're all engineers, this also helps us better understand the challenges our users have so we can better focus on solving the real problems you have and address the problems/gaps in our tools.

Machine Generated Transcript

Let's get started.

Welcome to Office Hours.

It's December 4th, 2019. My name is Erik Osterman and I'll be leading the — right, my screen's going nuts.

I'll be leading the conversation.

I'm the CEO and founder of Cloud Posse. We're a DevOps accelerator: we help startups own their infrastructure in record time by building it for you

And then showing you the ropes.

For those of you new to the call the format of this call is very informal.

My goal is to get your questions answered.

Feel free to unmute yourself at any time if you want to jump in and participate.

We host these calls every week and will automatically post a video recording of this session in the office hours channel, as well as a follow-up email.

So you can share it with your team.

If you want to share something in private,

Just ask.

And we can temporarily suspend the recording.

With that said, let's kick this off.

Here's the agenda for today.

First thing is Igor has joined.

He's part of the cloud posse team.

He's been with me for a very long time.

And he's going to share a little bit about the ThoughtWorks Technology Radar.

This is really cool if you haven't seen it before.

It's a great way to stay on top of what's happening in our industry.

And then if we have some time, we'll go over re:Invent announcements — some relevant ones that stood out.

Somebody's microphone is open there.

Vincent I guess.

Vincent Yeah.

Good to meet you Vincent.

There we go.

Sorry about that.

All right.

And then let's get started.

So first thing I just want to see if anybody has any pressing questions that you need answered.

Any problems.

This can be related to Cloud Posse stuff — Cloud Posse Terraform modules — Terraform in general, Kubernetes, or just general architecture questions related to DevOps and cloud.

Yes they'll hear it.

And I trust you to have a good question.

CIDR blocks.

How do you guys kind of do IPAM on AWS — how do we manage our subnet allocations, would be another way to put it.

OK Yeah.

Mute my notifications here.

I'm getting Slack-bombed.

All right.

So yeah.

Subnet calculations.

We spent a fair bit of time dealing with this.

That would've been like a year or two ago.

But it came back up again earlier this year when we wanted to create a subnet architecture that spanned multiple AWS accounts and was to some degree future proof.

So that we could continue adding accounts.

It's a challenge because if you want to support peering between accounts and VPCs, you need to think of that ahead of time.

So the best example, I have for that would be how we're actually doing it right now.

Woops let's get out of fullscreen mode here.

More efficient.

So, under the Cloud Posse reference architectures.

We implement one strategy.

You've implemented the reference architecture.

So you undoubtedly interacted with it.

But you might not have just known it.

Let's see if I remember exactly where that CIDR is — but Terraform provides a bunch of subnet calculation functions as interpolations.

And that's what we used to do.

To do this.

I'm guessing, because you're advanced at Terraform,

I'm guessing your question might be deeper than this.

What we basically did is we took a large CIDR — I think a slash 8.

And then we divided it evenly across the number of potential accounts that we would have.

And then we gave each one of those accounts a CIDR block, and within that account we further subdivided for the VPCs there. One of the things we did originally that we backtracked on, related to subnets and VPCs, was we used one VPC for backing services and one VPC for our Kubernetes clusters.

And then we would peer those.

And that would allow us to share that VPC for the backing services across multiple VPCs for different Kubernetes clusters.

But we decided against that because it just obliterated our CIDRs — our available IPs — because every time you divide, you cut it in half.

So we don't do that anymore.

Now we just run one large shared VPC with both backing services and Kubernetes clusters and related security groups.
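For the curious, that carving-up can be done entirely with Terraform's cidrsubnet() function — a quick illustration using terraform console, where the /8 and the account/VPC index numbers are just examples:

```bash
# Give account #3 its own /16 out of a /8, then carve VPC subnet #5 (a /24) out of that /16.
echo 'cidrsubnet("10.0.0.0/8", 8, 3)'                   | terraform console   # => 10.3.0.0/16
echo 'cidrsubnet(cidrsubnet("10.0.0.0/8", 8, 3), 8, 5)' | terraform console   # => 10.3.5.0/24
```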

Any more specific questions.

So I can tailor the answer better for you.

No one knows it yet.

So where we split for accounts, it's not based on architecture here.

So for example, we'd request a block from our operations center, and they split it from there based on need.

Yeah So different products may have different needs.

Wait a minute.

You need them art.

Yes, because we store our data elsewhere.

So just really kind of planning your own it.

So how are you guys managing subnet allocations then? Because basically you're saying it spans beyond what you can do in Amazon — it goes to other clouds, it goes to your data center.

It goes to other places.

Yeah Yeah.

Yeah in this area.

I bet maybe somebody else might have some insights as well.

How do they mean.

How do you manage subnet allocations at scale for a larger organization that spans multiple clouds?

That goes beyond kind of so Terraform is great from a practitioner perspective.

But how do you do it from like from a management perspective.

From a governance perspective — have you seen any software to manage these?

I guess a lot of companies just use ticketing systems.

Or honestly, like an Excel sheet.

Yeah, I'd love to know if there's software that actually does that.

No that doesn't go well.

Yeah, I can't get back to you on that.

Oh, yeah, they've been cool.

Yeah Yeah, I mean, I actually don't really know what's in it, but I mean, we run like nine data centers on our own.

And I know that they're sharing some subnets between these data centers.

So yeah, my wager is that they still use Excel spreadsheets.

But they're intuitive.

Maybe but I can ask.

All right.

Go, go.

All right.

You were not forgotten.

I just want to get through questions first, and then we'll fill it in with technology radar.

Sure take it.

Let's see.

Any other questions, Terraform or AWS related?

I have a question.

Yeah, sure.

All right.

I work in a place where we use a GovCloud account, and I'm trying to stand up a Kubernetes cluster.

But I've been having issues getting to the API because it's a private hosted zone.

Now I don't know what else to do.

Do I have to stand up a DNS server internally so I can get my cluster URLs to work, or do you have an idea of what to do with that in a GovCloud account?

Man I wish I could help you.

We've done nothing with GovCloud.

However, there's a handful of people in SweetOps that do work with GovCloud.

One of them is a regular here.

But he's not on the call right now.

So I'm guessing he's at re:Invent or something.

So — you know, I've used GovCloud.

Bobby OK fire away.

So my assumption here is that you're trying to get Route 53 to work within GovCloud.

Yes Yeah.

So within the partitioned region of GovCloud there is no Route 53 service, because it wouldn't make sense to have a public zone in a partition that is actually private — everything in GovCloud is private.

So I mean, there are two notions of zones in Route 53: public and private.

And this would be like a third, you know, state.

So they just don't offer it.

So you need a provider for your root account, or a different subaccount, from your public AWS account hierarchy — the organization. Your GovCloud account is parented by a public AWS account, right?

Yes Yes.

So it's within that tree, on the public side, that you need to allocate an account.

And then create Route 53 resources that match up with your GovCloud resources.

Interesting. One other question, just because I've seen it mentioned by others in the community: are you working specifically with kops by any chance, or are you using EKS?

Yes, kops, because EKS wasn't really an option.

Exactly So yeah.

So do we eat with what do you call it with us.

There is a mode where it uses gossip to discover the nodes — are you using gossip mode?

Yes And it's still not working for you.

OK So it creates the cluster.

But I can't get to the master node with the DNS provisioned internally.

I still can't get to the master.

What about with just the raw IP address of the master node, that you would see through the console or otherwise?

So it will resolve that — I could get to the cluster with the IP address — but generally we would need a fully qualified domain name within the cluster.

So yeah, I'm not going to be the best one to answer that — kops, or whatever is being used there, tries to call out to Route 53.

It's actually not really.

I really wonder if I misspoke.

I meant kops or whatever.

Yes — you know, with the kops command you can just specify a DNS zone, which in this case is probably internal, because it's gossip-based.

I'm just going to use the k8s.local domain.
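For context, gossip mode in kops is triggered just by the cluster name ending in .k8s.local, so no Route 53 hosted zone is needed. A hedged sketch — the state bucket, zone, and cluster name are placeholders, and GovCloud-specific regions/endpoints are not covered here:

```bash
# Cluster names ending in ".k8s.local" make kops use gossip-based discovery
# instead of requiring a Route 53 hosted zone.
export KOPS_STATE_STORE=s3://example-kops-state-bucket

kops create cluster \
  --name=example.k8s.local \
  --zones=us-east-1a \
  --yes
```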

But even with that.

If I try to log in from a bastion I still can't get to it.

Let me see if I can find somebody on SweetOps that would know best.

Yeah, let's see here — we have a kops channel.

I think we have some people there. I haven't tried to do what you're asking to do.

But this.

I did notice that Route 53 not being an in-region resource definitely made using Terraform harder. — Exactly.

Yeah, I tried it with my commercial AWS account.

It was easy.

It was like a breeze.

Yeah Yeah.

In general, a lot of tools don't support GovCloud, as you've probably figured out, right?

It's a region that isn't normally like part of people's suite of continuous integration tests.

So it never gets tested and never gets supported. That would be a good one to ask the community in the kops channel.

Is the bastion pointed to the right DNS? Yes, and you can look it up manually.

The DNS.

Yeah, so the DNS is actually within a private hosted zone.

So any other machine within the VPC in that hosted zone can resolve it.

So maybe the answer is you can only use a VPN with your kops cluster.

If I am less of it.

If I'm not going to use kops, do you have any ideas?

If I don't want to use kops? I believe Gravitational has one — I mean, there are other commercial distributions of Kubernetes.

Gravitational has one that is air gapped.

And I believe k3s also has an air-gapped install, yeah.

Gravity — the Gravitational one — they just raised a bunch of money, another round.

That's the business to be in.

Sorry I can't help more.

Well, they raised 25 million.

And I'm not sure, because I don't have experience with GovCloud.

But probably Teleport is what you need to access it from outside.

Does a cluster.

Well, I think that's an optimization — you can't get connected, period, right now internally using the DNS.

I believe when I used gossip mode with kops it was public cloud, and it wasn't this issue.

And I was able to just use the IP. All right.

So if I hear anything, I'll let you know.

I also reached out in the general channel to see if anybody can help me.

I'll be on the lookout — and do post back if you

figure it out in the end.

So we capture the knowledge — it helps.

OK Thank you.

Any other questions.

All right.

Well, you were when you.

Well, let me do a quick intro.

So as we were saying, we're going to introduce the ThoughtWorks Technology Radar.

If you haven't seen that already.

Check that out.

That's my tab.

It's taking forever to load here.

Yeah, so if it's top share.

I can continue.

Yeah, right.

So the ThoughtWorks Technology Radar has been going on for a while.

Well, Igor is going to give an introduction to that. Igor is on the Cloud Posse team.

And let's treat this as a way to open up the conversation about cool things out there.

Yeah So.

Hello, guys.

The link that Erik provided on his slides is outdated, because a few weeks ago a new Technology Radar was released.

It's volume 21.

So I guess you've heard about the Technology Radar.

But anyway, I will do a short introduction so everyone is on the same page.

So the Technology Radar is a report that comes out regularly.

ThoughtWorks is a company that does software development and consulting.

So they put out a report with their opinion on where technology is going and what is interesting —

what they have had good experience with, what they are watching, and what they suggest to stop using.

And a cool thing about why I like it and follow it is that they have been doing this report for 10 years.

And we can look into the past and see what ideas and insights became mainstream and where they made mistakes. The report includes a wide spread of technologies.

And today I will run through the things that relate to DevOps,

so skipping things like frontend, machine learning, et cetera.

The real.

Igor, your audio is breaking up a little bit.

No, I agree.

You're right.

Was I cutting out? Is everything OK?

Is it better.

Sorry, for a moment I was lost for about 20 seconds.

OK OK.

So something which I like about this report is that when I read it,

I found that there is a lot that we also do at Cloud Posse.

And the whole report is at least key notes to discuss, look at, and research.

So I hope that you will share your thoughts about different points.

OK, so the report consists of four quadrants: techniques, tools, platforms, and languages & frameworks. Today we will quickly run through techniques, and feel free to look in the other quadrants to see if there are some tools you like. If you have experience with any of them, or you have any opinions, et cetera, you are welcome to share.

It will be very interesting.

So let's go to techniques.

So the radar consists of four rings.

Adopt is a list of things they have good experience with and suggest adopting in most projects.

Trial is a list of techniques, tools, platforms, et cetera that they have good experience with, but the disadvantages are still not fully known and it doesn't fit most projects yet.

Assess is things they are looking at that look promising, and Hold is the section where they provide the list of things you should stop doing,

because it no longer looks like a best practice. So, going into the techniques

section, and what we use and do at Cloud Posse.

So the first one is container security scanning. Really, it's now built into Docker Hub

and Amazon's container registry, ECR.

And Eric just mentioned before, the call about Aurora.

Right OK.

So Quay has been open sourced, and that has Clair in it.

This is now ubiquitous in container registries.

So yeah.

So that one is in Adopt — start using it in everyday projects.

This is it.

So this is a technique of scanning Docker images to find known security issues.

And if it finds something, it flags it so you can see it.
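As a concrete illustration of wiring a scan into CI — using Trivy here as a stand-in scanner, not necessarily the scanner the radar entry has in mind — fail the build when known high-severity CVEs are found (image name is a placeholder):

```bash
# Scan the freshly built image and fail the pipeline on HIGH/CRITICAL findings.
trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/myapp:abc1234
```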

Another thing is pipelines for infrastructure as code.

So we don't apply changes to infrastructure by hand.

We use CI tools like Travis, Codefresh, Jenkins, et cetera.

We have Atlantis, which is a tool that runs tasks when a pull request is opened against Terraform code.

The tasks that perform the actual changes are described as code and stored in our repository.

So from that point of view, we have followed this pattern for at least a year.

I guess.

And the mindset is: this is a good practice.

And it is very useful.

And Atlantis is a good tool for such tasks, probably better than most CIs today, because it gives you the ability to check what changes it will apply.

Cool. And another thing which is interesting,

and which we are trying to adopt, is run cost as an architecture fitness function.

So the idea is that you should monitor the cost of the whole system and the different subsystems, down to the level of how much each part costs you.

What I'd add to this is just that it didn't stand out to me until right now, and I'm reading it for the first time.

So we've been using Kubecost, which is an open source, open core kind of Kubernetes cost tool that behind the scenes uses Prometheus.

It also works with Grafana.
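If you want to kick the tires, installing Kubecost is basically one Helm chart — a sketch, with the repo URL and namespace taken from memory of their docs, so verify against the current install instructions:

```bash
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update

# Helm 3-style install into its own namespace.
helm install kubecost kubecost/cost-analyzer \
  --namespace kubecost --create-namespace
```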

What's interesting here is what this is pointing out is that you can observe the cost of running services against the value you deliver.

So this gets kind of interesting: if in Prometheus you have access to bottom-line numbers for your business — orders sold, or sign-ups, or things like that —

This gets interesting because you can now back that out into what it cost to operate it.

And I think that's what the fitness function here is referring to.

Yeah, so this metric consists of two parts: a cost metric and a value metric.

How to collect the value metric is business specific — whether the source is Prometheus or just a database somewhere.

So basically there needs to be ETL, or real time — basically a Prometheus exporter that ingests that data from whatever source you have.

And then you can truly achieve what they're describing here.

The question for me is how to calculate the value of a feature, at which point, for example —

So Kubecost can show you how much each service or pod you are running on Kubernetes costs you. Another tool that provides this type of information is Spotinst, which is a SaaS. It does a great job at what it does, but with Spotinst you can't factor in your own metrics as part of that — it will just show you how much your namespaces cost, your COGS.

So in this case, let's say, for example, for a period of time you sold a million.

And for that same period of time your infrastructure cost, you know, $100,000 to operate.

Now you know that you have a 10x return for every dollar that you're spending on infrastructure. — Is it really an interesting metric, compared to what companies spend on marketing, for example?

I mean, I mean.

OK I don't know.

I mean, for us, I would not care about that architecture cost metric.

I know I pay this much a month, and this is how much my customers pay me each month.

But yeah if I look at our customer acquisition costs that we calculate based on our marketing.

I think.

OK, each customer costs us roughly a certain amount of euros in marketing.

Yeah So yeah.

So here's where it can be interesting.

If you have a SaaS product, basically: what is your opex per customer?

And where that gets more interesting is perhaps there are services that you're operating that have a very high opex,

but it's low relative to the value that you're providing, for example.

So I think it's up to the business ultimately to decide.

I think one of the things that's been frustrating for me in this position with infrastructure is that it's always seen as a cost center right.

It's like where money's going.

But you're not showing value.

But you need.

We need to get better about showing the value that we're providing as well.

And tying that back out to metrics that the business uses.

So what are those.

I don't know.

Right Yeah points up here.

Let's see, what are some other interesting ones?

Another one we can talk about is design systems — the idea is to provide a collection of design patterns, different component libraries, et cetera.

How you.

So this helps future development, et cetera, and it looks very similar to what we do with the reference architectures —

that's what Erik showed today answering the question about subnets — and to what we do with the Terraform modules.

So this is true for modules: a collection of components

that interact with each other. Does anyone have something similar at their companies, like documentation or examples? I don't know.

Well, let's go present.

So another interesting thing is binary attestation.

This is a category of tools.

They provide a list of them, such as in-toto.

And Docker Notary.

These let you do cryptographic verification of binary images — attesting that they're authorized for deployment, checking integrity, et cetera.

So we had some experience with that, though in a basic way.

So we had a single store for artifacts and Docker images, and that gives us a guarantee we run the same binaries.

The same images across all accounts.

And if the checksum of the image matches, then it's considered approved.
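As a rough illustration of that "single artifact store, verify the checksum before promoting" idea — not Docker Notary or in-toto themselves, and the file names and manifest format here are made up — a sketch could look like this:

```python
# Sketch: verify an artifact's SHA-256 digest against a manifest of approved digests.
import hashlib
import json

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_approved(artifact_path: str, manifest_path: str) -> bool:
    """The manifest is assumed to be produced (and ideally signed) by CI,
    e.g. {"myapp-1.2.3.tar.gz": "<sha256>", ...}."""
    with open(manifest_path) as f:
        approved = json.load(f)
    name = artifact_path.rsplit("/", 1)[-1]
    return approved.get(name) == sha256_of(artifact_path)

if __name__ == "__main__":
    ok = is_approved("dist/myapp-1.2.3.tar.gz", "approved-digests.json")
    print("approved for deployment" if ok else "NOT approved")
```

Real attestation tools go further by cryptographically signing those statements, but the promotion gate has the same shape.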

So binary attestation looks interesting.

In addition to our existing CI/CD flow.

Is anybody practicing this here.

Not yet.

I'd love to basically.

I wish you know who I am.

Yeah, I honestly, I I've been trying to figure out how to write up my story without getting into trouble because it would be hugely embarrassing for a number of companies.

But yeah.

Honestly, the Fedora people do this.

A lot of the official package registries do that.

But at the Docker level, deploying cryptographically signed Docker images is one of those things where, kind of —

We know we should be doing it, but I don't really know anybody who is doing it.

Zalando does it — or at least they do it for their Kubernetes operator.

They have a really wonderful Python framework.

Again, I'm kind of a Python guy.

I don't really know much about go.

But they have that baked in.

In fact, even just to contribute patches you have to register your GPG key.

I mean, yeah, but that's on the Git side, though, right?

Well, the thing is that the operator is an image.

In other words.

It's the artifact.

Basically, your artifacts become the images, so yes.

Yes, it would be.

It would be on the GitHub side, right.

But they also sign the images that get built, because they publish those things out.

I mean, that's how that's how the code runs is as a Docker image.

Yeah Suzanne and Lynn do that to us.

Oh, yeah, sure.

A lot of CI systems do.

Yeah, it's funny — the name of the repo is a little bit strange.

It's like one of their incubator.

Hang on a second.

I had it right here.

I put it in the chat in the office hours channel.

OK I sent you.

Well, I will check it later.

Yeah, that's interesting. really interesting.

So there is a dependency drift fitness function.

So the idea is to define it as a metric.

How many dependencies does my software have?

And see whether this metric goes up or down.

And then you keep complexity under control based on this function. We haven't done it.

But it's.

Yeah, looks interesting.

It should be easy to implement, and it kind of gives you observability into what needs to be changed — at which point you're using an outdated component. The dependency drift fitness function is a technique, specific to evolutionary architecture, that uses fitness functions to track these dependencies over time and indicate the work needed.

Is this basically being able to see at a glance how out of date all your stuff is — like your Docker images, your Helm releases, your packages, your charts — relative to upstream, et cetera?

Or I don't know.

I think the goal is that when you add something to your software, you can see how much dependency it adds.

So for example, you might not want to pull in some Python library because it brings in too many dependencies.

And when it's not only you, but you have, like, a large team.

Yes, I did.

That's a metric that gives you what is going on in the project.

If anyone adds something, you can see it — yeah.

So there's that side, which is kind of: are you increasing the surface area of the code you manage?

And then there's also the drift of all the dependencies — as they get out of date, the tech debt piles up.

So that's what I was working toward — I mean, the idea is to be as up to date as possible and to track how up to date your dependencies are.

That's how I understand it.
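One way to read the "drift fitness function" in practice: count how many of your dependencies are behind their upstream and watch that number over time. A minimal sketch for Python dependencies, assuming pip is available in the environment:

```python
# Sketch of a dependency-drift metric using `pip list --outdated --format=json`.
import json
import subprocess

def outdated_dependency_count() -> int:
    """Number of installed packages whose latest version differs from the installed one."""
    out = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return len(json.loads(out))

if __name__ == "__main__":
    # Export or record this number over time; if it only ever grows, you are drifting.
    print(f"outdated_dependencies {outdated_dependency_count()}")
```

The same idea applies to Docker base images or Helm releases, with whatever tool reports "installed vs. latest" for those.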

Yeah, this is where that Helm tool — was it helm diff? — comes in.

What was the name of it?

Yes Yes.

I'll notify our needs.

Now a net exporter.

Yeah Yeah.

So we can implement the drift fitness function for Helm.

That's it.

That is interesting, especially if you've got an upstream project that you don't know all that much about.

I mean, I hung out with the Python Sanic people.

It's a kind of an async web framework.

Lightweight, and they're very nice guys.

Honestly, I was just there — really, it was almost like my support group, like you guys.

And I noticed that their package in Fedora was out of date and it wasn't building properly.

So I kind of like tried to.

I haven't finished yet.

But I've been updating it.

And they're all these weird changes going on in Python with Python 3 and phasing out a lot of Python 2 stuff.

And then all these weird web sockets things.

And it was like, oh, this could get really gnarly — and Fedora, like, they want to use all the latest stuff.

But they still need to support the older things.

And it's like, oh my god, I think the Python 2 to Python 3 thing would have exploded any fitness function.

Yeah Yeah.

Well, it's interesting — there's also what Perl has done with Perl 6.

Now they just renamed it — or I believe they at least voted to rename it — and I forget the name they've chosen.

But like Perl 6 was like a totally different language.

Let's not pretend that there's an upgrade path between Perl 5 and 6.

So they just cut bait and created a new language — something that had been dragging on for like 15 years.

It was exactly like the butt of a joke.

When I was in college, year two, in 1999 or whatever, I remember talk of Perl 6 back then.

Bringing back memories, man.

OK So are you being serious.

Were Yeah.

In this section there are two points.

They're related to each other, and we can go further with them.

And there is an interest in it.

So these are security policy as code, and sidecars for endpoint security.

So yeah.

Well, when we talk about security policy as code in our everyday practice.

We usually talk about IAM policies that give permissions to applications — where Kubernetes interacts with Amazon and gets access somewhere — plus security groups and all that stuff that manages security.

But here I found something that looks interesting, where we can do more with this.

So this is Open Policy Agent, which is a tool that gives you the ability to define different security policies as code, and it integrates with a lot of platforms and mesh services like Istio.

Here it is — one second.

So it supports Envoy.

Kafka — I don't know what else it says.

There's a company behind it.

And it looks like it's in the incubator program.

Cloud Native Computing Foundation.
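For a sense of how OPA gets used, here's a hedged sketch of querying a locally running OPA over its REST Data API for an allow/deny decision. The policy package name (httpapi.authz) and the input fields are assumptions for illustration, not anything from the call.

```python
# Sketch: ask OPA (listening on localhost:8181) whether a request is allowed.
import json
import urllib.request

def opa_allows(user: str, method: str, path: str) -> bool:
    payload = json.dumps({"input": {"user": user, "method": method, "path": path}}).encode()
    req = urllib.request.Request(
        "http://localhost:8181/v1/data/httpapi/authz/allow",  # assumed policy package
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("result", False)

if __name__ == "__main__":
    print(opa_allows("alice", "GET", "/finance/salary/alice"))
```

The policy itself is written in Rego and loaded into OPA separately; the sidecar pattern mentioned next just puts this same query next to each service.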

So it seriously looks promising.

And really there are two fields left where there are a lot of new tools: security, and cost control in the cloud.

So this is one instrument you should look at.

And related to it are sidecars for endpoint security.

So when you use a public cloud, and you run services across or outside the one cloud provider, then you can use a common sidecar for endpoint security, and Open Policy Agent gives you a standard.

How you can define a policy across different clouds and environments.

So does anyone have experience with something similar?

Or with how you solve the cross-cloud security problem when you use a public cloud, or something like that.

Because we're only on Amazon.

We aren't on multiple public clouds yet.

You see all these demos for the service meshes that show how easy it is to do cross-cloud networking using the mesh.

It's all on the flight path.

Things are looking up — apparently sidecars are now a first-class citizen.

I think one of the ideas is it one that Senator Yeah you.

So I know in the OpenShift scene there's a lot of discussion around this.

Well, I should.

So Argo, I guess, is getting some traction among OpenShift users, and the first time I heard about this stuff was just a few days ago, where people were asking whether they could use OPA to federate.

The authentication — the authorization — for Argo, because the thing about Argo is it's got, like, its own.

You have to auth into Argo and you also have to auth into OpenShift.

And so it's a little bit awkward — though Argo's is really not bad.

It uses Dex.

So it's very simple to integrate, like, Google OAuth or GitLab or GitHub — super, super simple — but it's still, like, its own thing.

So it would be nice to have it somehow federate. That would probably be, like, maybe in those CD apps where you're running nested administrative domains — maybe that would be a place to do it.

You could say, you know, you only have to auth into, like, the parent container.

So to speak, or the parent framework — a little bit like what AWS tries to do with Cognito.

So, to distill what you're saying.

There's a need for something in Kubernetes land which makes it easier to standardize authentication across apps.

Yes and make it all work.

And I agree.

And we've spent a lot on this.

We personally use Keycloak, but I mean, it's just like everything's a hack on top of a hack on top of a hack.

And then with Keycloak, we use Gatekeeper — we deploy a Gatekeeper for every single service we want to expose.

But even though we do that, all we're doing is providing an authenticated way to access the app, which very few apps actually then handle.

Yeah, fine-grained access controls — the Kubernetes dashboard is the exception to that, where we can actually pass through the role, and the Kubernetes dashboard honors it.

But yeah, I wish this were more standardized.

This is the challenge, though, with open source right.

Integrating dozens of technologies from dozens of different vendors — some use Kubernetes and some use something else.

And I'm not holding my breath.

Well, this is what OPA is trying to achieve.

Right. I mean, look, I think they know technically what they're trying to solve, and they're doing a great job.

I mean, I know a big German bank — or Dutch bank — is implementing their compliance stuff with it, and they're pretty successful with it; it's getting a lot of traction in Berlin right now.

That's from what I see.

So OK.

So it works.

It works with Envoy, so I can add an Envoy proxy.

So I see that connection point there.

Let me just put a link in the office hours channel.

OK, cool — it integrates with a lot of things.

I mean, OPA does, at least.

All right.

Well, then that is encouraging.

Yeah Yeah.

So that's all from this section.

If you like this format, guys, I would be happy to continue.

The next part is the platforms section; we especially have experience with some of the tools and platforms mentioned there — for example Teleport, Dependabot, et cetera.

And if you choose something that you use, or that I've tried, we can discuss it.

So that's it — thanks so much.

And, yeah, thank you.

Yeah, thank you for bringing that up.

What I think we'll be doing is pulling some of these topics out and adding them as talking points for future office hours as well.

So any other questions, guys? It can be totally unrelated, or a comment on the talking points as well.

re:Invent, man.

Yeah Yeah Yeah.

Cool Yeah, totally totally.

It's the elephant in the room right now.

It's obviously re:Invent.

So what are some cool announcements there.

Out of the 150 announcements or whatever that they're making.

I feel like it's almost abusive at this point, the number of announcements that they drop on us in one week.

Yeah, I tend to just wait until it's done.

And then start watching them on YouTube.

So I can actually.

I mean, there are lots of memes going around on Twitter right now.

Like that.

Really Yeah.

Like the one about a new service where they bill per line of code.

And it's all like, oh yeah, who needs newlines?

We just put it all on one line and don't pay anything for the servers.

So, related to re:Invent — I chatted with my AWS rep.

Yesterday about the new savings plan stuff.

If you haven't had a chance to look at that.

It's well worth your time.

Basically, it takes the best part of convertible reserved instances — and rather than having to commit to a certain family and number of instances in normalized units and do all those calculations.

Especially if you're rapidly changing between instance types and scaling out.

That can be really hard to figure out.

You just commit to a dollar-per-hour amount, and it applies across all your compute and across regions, which RIs don't do.

RIs are always region specific.

You can never convert though.

So it's actually pretty cool.

It's GA now, as far as I know — they announced it sometime in early November.

So definitely I would highly recommend looking at that.

Talk to your accountant and make sure it makes sense for your use case.

But I do not see a use case where it's worse.

I haven't yet seen a use case where it's worse to go with the new Savings Plans for your EC2 compute rather than reserved instances.

Yeah, so just keep it in mind for when those start expiring.

Yeah, that's interesting also.

Yeah, exactly.

Right — if your RIs are expiring, this is probably the way to go after that.

And then there's the savings of that relative to using spot instances, for example.

And how big the delta is between RI pricing and spot instances.

So the savings plan pricing for me.

I was looking at partial upfront three year dollar commitment amounts.

And it was roughly — it was just about 50% off your on-demand cost.

Nice So pretty nice.

Though obviously, it's pretty comparable to RIs.

In some places I saw it was a higher percentage of the on-demand price, and it was never below that, from what I saw.

So as I said, I don't see an instance where it's worse to use it, and you gain the flexibility of the savings plan stuff.
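Back-of-the-envelope math for that, with illustrative numbers rather than real AWS pricing:

```python
# Illustrative only -- not actual AWS pricing.
on_demand_monthly = 30_000        # current monthly on-demand compute spend, $
discount = 0.50                   # ~50% off for a 3-year, partial-upfront savings plan
savings_plan_monthly = on_demand_monthly * (1 - discount)
hourly_commitment = savings_plan_monthly / 730   # commitments are expressed in $/hour

print(f"Commit to about ${hourly_commitment:.2f}/hr to cover the same workload, "
      f"saving roughly ${on_demand_monthly - savings_plan_monthly:,.0f}/month")
```

The point is just that the commitment is a dollar amount per hour, not a count of instances.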

They only have it for EC2 compute — you can't use it for RDS.

Yes, they have a much harder job tackling RDS because they have to deal with software licensing for SQL Server and Oracle and all that crap.

And they also don't have it for, like, ElastiCache instances yet either — they have RIs for those.

So for those you still have to go through RIs.

But the savings plan.

Now I hope they announce something for RDS, where they're just like, yeah —

We're going to expand this to RDS and ElastiCache this year — because I'd rather just say, yeah, I'm going to spend 30 grand a month on my compute and 20 grand on RDS.

Please give me a discount on that guaranteed spend and go from there.

Yeah Cool.

Yeah, thanks for bringing that up.

I forgot to bring up the thing about savings plans.

That's a pretty good cost optimization trick that they introduced — just jumping back to that.

I'm pretty sure we'll be talking more about re:Invent next week, since we're already at the end of the hour here.

Any other big announcements? Obviously EKS on Fargate being a big one.

And then the week before last — or just last week, whatever it was — the fully managed node groups for EKS. Both have Terraform support already, which is cool, because that means Amazon is working directly with HashiCorp to get this stuff ready as the announcements roll out.

Thanks so much.

Good question.

I don't know if that's supported in CloudFormation yet.

But yeah sometimes even CloudFormation lags behind right.

So it'd be interesting to find out — having Terraform support right away is nice.

Yeah, let me know if any of you find that out — post it back in office hours, it'd be interesting to find out.

I don't know if he or.

Actually — if any of you guys want to monkey around with some of that cost stuff.

I have, certainly for the next couple of months, pretty much unsupervised access to a bunch of demo servers and resources in Sumo Logic.

So we can do all sorts of aggregation of logs and metrics there.

Thanks for extending the offer.

You guys can hit him up in the office hours channel too.

Yeah, just exactly.

Yeah All right, well, then we.

That brings us to the end of the hour.

And that wraps things up.

Thanks, everyone, for sharing — and especially for taking the time to prepare the notes on the Technology Radar.

I always learn so much from these calls and a recording of this call is going to be posted in the office hours channel.

See you guys next week.

Same place, same time, guys.