Wednesday, October 30, 2024

LXD/LXC networking a static crossover directly connected NIC

For the life of me I couldn't find this out there on the net. I watched countless videos and read blogs, but to no avail.

The task seemed easy: I have two physical hosts with dual-port 25GbE NICs. I only have a 10GbE switch, but I want to take advantage of the faster link speed, so why not use a DAC and just static-IP the interfaces?!

Here is the setup:

srv1 & srv2 hosts are set up identically:

eth0_10gbe (NIC) --> br0_10gbe (host bridge) 10.10.0.0/24

eth1_25gbe (NIC) --> br1_25gbe (host bridge) 10.25.0.0/24

I then set up LXD to use the 10GbE bridge for networking, attaching containers to the unmanaged (by LXD) host bridge via LXD's bridged NIC type. This allows DHCP requests to be passed to my LAN DHCP server (a Ubiquiti UDM Pro) for that particular segment (a VLAN-tagged network). This was relatively easy, and there are a bunch of tutorials out there since it's a fairly standard setup.

In Ubuntu I used netplan to add the VLANs on top of the host bridge, and I use the resulting "sub-interface" in the container. Said differently: the UDM Pro sends the tagged VLAN traffic to the port, the NIC feeds it into the host bridge, netplan breaks out the VLANs on the bridge, and the LXC containers are attached to those VLAN interfaces.
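For reference, a minimal netplan sketch of that host-side layout (the bridge and NIC names match the setup above; the VLAN ID of 20 is just an example, use whatever your network actually carries):

```yaml
network:
  version: 2
  ethernets:
    eth0_10gbe: {}
  bridges:
    br0_10gbe:
      interfaces: [eth0_10gbe]
      dhcp4: true
  vlans:
    vlan20:
      id: 20
      link: br0_10gbe
```

The `vlan20` interface is then what the containers get attached to.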

The direct-connected NIC between hosts was super simple to configure at the host level; I was scp-ing files at 25GbE within minutes. Getting LXC containers to do the same proved a little more challenging.

I eventually settled on creating a bridge (as stated above) and adding it to the container as eth1 using LXC's bridged nictype:

lxc config device add <container_name> eth1 nic nictype=bridged parent=br1_25gbe

I then needed to go into the container OS and manually configure an IP.
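Inside the container, that manual step is just another netplan file; a sketch, assuming an address in the 10.25.0.0/24 backhaul subnet (the .11 is made up, pick a unique one per container):

```yaml
# /etc/netplan/60-backhaul.yaml (inside the container)
network:
  version: 2
  ethernets:
    eth1:
      addresses: [10.25.0.11/24]
```

Then a `netplan apply` inside the container brings it up.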

One of the LXD folks wrote a blog post about some of the other kinds of networking, including code for doing manual static IP addressing from an LXD profile, which is likely what I'll do next. All of my LXC containers are provisioned via Terraform and then configured via Ansible, so maybe I'll go a different route (pun intended), but I'm not sure yet.
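A sketch of what that profile approach might look like, assuming a recent LXD that honors the `cloud-init.network-config` key and a cloud-init-enabled image (the address is a placeholder, and per-container addresses would still need to differ):

```yaml
# lxc profile create backhaul && lxc profile edit backhaul
config:
  cloud-init.network-config: |
    version: 2
    ethernets:
      eth1:
        addresses: [10.25.0.11/24]
devices:
  eth1:
    type: nic
    nictype: bridged
    parent: br1_25gbe
```

Since the address has to be unique, templating this per container from Terraform is probably the cleaner fit for my setup anyway.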

But why though? This is all to allow for some of the containerized services to use the 25GbE network as a backhaul connection for their cluster traffic.

Tuesday, October 29, 2024

LXD (now Canonical's) a journey into the simplistically complex

The history of LXD is not part of this post...

LXC (Linux Containers) was not the goal. I have experience with OpenStack and wanted to go that route, but the hardware requirements and overhead for my particular project were so small (just a few different services) that someone on my team said, "I'm just gonna install them all on my Mac and be done with it."

I couldn't let that go; I needed some CI/CD and config management, I needed repeatable, code-based infrastructure!

I first found Canonical's MicroCloud and was kinda sold! But their hardware requirement was a minimum of three like servers, and we had just two. So I did flirt with setting up LXD and then running a multi-node MicroCloud cluster on VMs inside it, but that seemed a little crazy.

So I started with one host, configured LXD, and started building containers with Terraform and Ansible. It worked so well I was convinced this was the right path forward.

The hardware is a pair of Ryzen 5 boxes, each with maxed-out RAM, a few SSDs, and a dual-port 25GbE NIC. One port of each NIC is direct-attached to the other host (no switching), and the other goes into a 10GbE switch, since that's what we have.

I configured each LXD server as a remote of the other, adding the remote over the 25GbE network so copy/move operations would run at max speed.

I had some trouble getting that part working: each time I followed the steps to add the remote using the trust certificate, I would get an error. No amount of googling found the answer, so I tried to post on the LXD forums at Canonical... that was a waste of time since I couldn't even post, and there was a disclaimer about "not for tech support" bla bla... so, Reddit for the win!

I created a post, "LXD to LXD host on one NIC, everything else on another?", in the LXD subreddit, and within minutes I was off and going!
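For anyone landing here from a similar search, the shape of the fix as I understand it, assuming a token-based LXD 5.x (the IPs follow the 10.25.0.0/24 scheme above and are examples):

```
# on each host, make the LXD API listen on its 25GbE address
lxc config set core.https_address 10.25.0.2:8443   # on srv2; use 10.25.0.1 on srv1

# on srv2, mint a trust token for srv1
lxc config trust add --name srv1

# on srv1, add srv2 over the direct link, pasting the token when prompted
lxc remote add srv2 10.25.0.2
lxc list srv2:
```

The key bit for my two-NIC situation was binding `core.https_address` to the 25GbE interface so remote traffic actually goes over the fast link.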

I can run lxc commands from my laptop over wifi, see both remotes, and issue commands; it's wicked simple once you get it all set up and memorize all the CLI commands ;)

I opted to have the 10GbE port connected to the switch, receiving all the tagged VLAN traffic, then use a Linux bridge to pass the VLANs as networks into the containers. I kinda followed the OpenStack pattern from the old days, but I also found Trevor Sullivan's YouTube tutorials super helpful, as well as the YouTube channel and site of Stéphane Graber, the OG PM of LXD!

I might add more details in the future, but my hope is that if anyone else is looking to run some LXD/LXC servers that are not clustered and hits some of these roadblocks, maybe this will shorten their search a little.

Tuesday, September 10, 2024

MySQL dumps from Cloud SQL, transform to CSV, then LOAD DATA INFILE locally

I've got some databases I need to migrate from one server to another. We opted for the good old-fashioned `mysqldump`.

So far it has been wicked slow to import one of the tables. While working through a solution, I needed to transform the data from the dump format (extended INSERT statements, one per line) to a CSV file. Yeah, I could have just done the export again in the new format, but why bother with all that, and the network egress fees, again? Let's just transform what we have.

I asked ChatGPT to come up with a script to do this for me... that script failed :(

I ended up basically just settling on the good old-fashioned grep & sed commands:

time grep "INSERT INTO" /path/to/file_src.sql | sed -e 's/^[^(]*[(]//' -e 's/),(/\n/g' -e 's/);$//' > /path/to/file_dest.csv

So with this one command you keep only the lines with INSERT statements and remove the front part, something like: "INSERT INTO `table` ("

You also "split" the data, making a new line at each "),(", which is the boundary between rows.

Then we trim the trailing ");".

So this one-liner does basically all we need... if you have hundreds of gigs of tables like me, it's faster to run this than to re-export it all over the net.
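To sanity-check the one-liner, here's a tiny self-contained run on a made-up two-row dump (GNU sed assumed, since `\n` in the replacement is a GNU-ism):

```shell
# fake mini-dump with one extended INSERT (sample data, obviously)
cat > /tmp/file_src.sql <<'EOF'
-- MySQL dump header junk the grep will skip
INSERT INTO `t` VALUES (1,'a'),(2,'b');
EOF

# same pipeline as above, run on the sample
grep "INSERT INTO" /tmp/file_src.sql \
  | sed -e 's/^[^(]*[(]//' -e 's/),(/\n/g' -e 's/);$//' > /tmp/file_dest.csv

cat /tmp/file_dest.csv
# 1,'a'
# 2,'b'
```

One caveat: this assumes the row data itself never contains the literal "),(" sequence; strings with parens and commas in them could trip it.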

We then needed to import it, so I manually took the CREATE TABLE part of the dump, put it in another file, and wrapped the import in a loop. I experimented with additional MySQL tuning bits and bobs over the iterations and have no idea if they helped.
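The import side isn't in this post (see the gist), but the general shape looks something like this, with the table name and the tuning toggles being illustrative guesses rather than the exact script:

```sql
-- speed-ups commonly toggled for bulk loads (remember to re-enable after!)
SET SESSION unique_checks = 0;
SET SESSION foreign_key_checks = 0;

LOAD DATA LOCAL INFILE '/path/to/file_dest.csv'
  INTO TABLE my_table
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\''
  LINES TERMINATED BY '\n';

SET SESSION unique_checks = 1;
SET SESSION foreign_key_checks = 1;
```

Note that `LOCAL INFILE` has to be enabled on both the client and the server (`local_infile=1`) or MySQL will refuse it.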

Find more and the scripts in this gist https://gist.github.com/ivanlawrence/c3fdbdcab0a34df714f0361f5f55e721

Friday, March 22, 2024

Google Workspace Routing: gmail pro catch-all routing

I like to do this dumb thing where I use a unique email address per service. It started as a way to thwart organizations that sell my email address, but now that everything has been breached it has a side benefit: I don't have the same username at every site, so searching through my password manager is fast, and someone trying to log in as `microsoft@example.com` on Google has nothing to do with my real account.

Well, I've moved my DNS registrar and name servers around a bit now that Google Domains has been sold. The upshot is that Cloudflare's domain registration is a little cheaper, and their API is very easy to deal with if you wanna roll your own dynamic DNS service or something.
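As an example of the "roll your own" bit: updating an A record via Cloudflare's v4 API is one authenticated PUT. The zone/record IDs, token, and hostname below are all placeholders you'd look up in the dashboard or via the API:

```
# grab the current public IP (ipify is just one of many such services)
IP=$(curl -s https://api.ipify.org)

# update an existing A record in place
curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"$IP\",\"ttl\":300}"
```

Throw that in a cron job and you've got a poor man's dynamic DNS.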

When I moved DNS some of my emails started to bounce :(

It looks like there was a Gmail default route for "Google Domains Email Forwarding", but I couldn't find anything about it in the docs ("Set up Default routing for your organization"). I think that's what I was using before, but it could no longer route since I wasn't using Google Domains.

The change I made was: All recipients > Envelope recipient > Change envelope recipient > Recipient username > my_username

When tested, this gave the result I wanted: previously bounced emails like `foo@example.com` now get delivered to `my_username@example.com`.

I thought I'd share it here in case I forget and need to do it again. I have default routes for other domains, and one of them adds a recipient instead of changing the recipient, which kinda dirties the email headers (more than they already are), but it's also a way.