test
Some checks failed
Build and Push Docker Image / build (push) Failing after 31s

mcbtaguiad
2026-01-19 21:31:03 +08:00
commit ec77f4121f
2499 changed files with 1106308 additions and 0 deletions


@@ -0,0 +1,36 @@
name: Build and Push Docker Image
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:
  # push:
  #   branches:
  #     - '**'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Get branch name
        id: branch
        # Provides steps.branch.outputs.branch, referenced below.
        run: echo "branch=${GITHUB_REF_NAME}" >> "$GITHUB_OUTPUT"
      - name: Log in to Gitea Container Registry
        run: |
          echo "${{ secrets.TOKEN }}" | docker login \
            ${{ secrets.SERVER }} \
            -u ${{ secrets.USERNAME }} \
            --password-stdin
      - name: Build Docker image
        run: |
          IMAGE="${{ secrets.SERVER }}/${{ github.repository }}:${{ steps.branch.outputs.branch }}"
          docker build -t "$IMAGE" .
      - name: Push Docker image
        run: |
          IMAGE="${{ secrets.SERVER }}/${{ github.repository }}:${{ steps.branch.outputs.branch }}"
          docker push "$IMAGE"
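The image tag is built from a branch-name step output; deriving a Docker-safe tag from the `GITHUB_REF` the runner provides can be sketched in plain shell (the ref value here is illustrative):

```shell
# Derive a Docker-safe tag from a Git ref. In Actions, GITHUB_REF looks
# like "refs/heads/main" or "refs/heads/feature/login".
GITHUB_REF="refs/heads/feature/login"
BRANCH="${GITHUB_REF#refs/heads/}"   # strip the prefix -> "feature/login"
TAG="${BRANCH//\//-}"                # slashes are invalid in image tags -> "feature-login"
echo "$TAG"
```

On `main` this yields the tag `main`, matching what the workflow pushes.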

CNAME Executable file

@@ -0,0 +1 @@
git.tagsdev.click

Dockerfile Executable file

@@ -0,0 +1,29 @@
FROM docker.io/ubuntu:22.04 AS builder
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y hugo
WORKDIR /site
COPY ./app/ .
RUN hugo
FROM docker.io/nginx:1.25.5-bookworm
WORKDIR /app
COPY --from=builder /site/public/ .
COPY ./nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
# FROM docker.io/httpd:latest
# COPY --from=builder /srv/jekyll/_site/ /usr/local/apache2/htdocs/
# COPY --from=builder /site/public/* /usr/local/apache2/htdocs/

README.md Executable file

@@ -0,0 +1 @@
marktaguiad.dev

app/.hugo_build.lock Executable file

app/archetypes/default.md Executable file

@@ -0,0 +1,6 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---
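The archetype's title template, `replace .Name "-" " " | title`, turns the content file name into a human-readable title. A rough shell equivalent, for illustration only (Hugo's `title` function applies its own title-case rules):

```shell
# Approximate the archetype's title template: replace dashes with
# spaces, then capitalize the first letter of each word.
name="my-new-post"
echo "$name" | tr '-' ' ' \
  | awk '{for (i = 1; i <= NF; i++) $i = toupper(substr($i,1,1)) substr($i,2); print}'
# prints: My New Post
```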

app/archived/bulan.md Normal file

@@ -0,0 +1,34 @@
---
title: "Bulan"
date: 2022-09-25T09:33:01+08:00
#draft: true
author: "Mark Taguiad"
subject: 'Bulan ko'
tags: ["luna", "bulan"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 1
---
# Bulan
Manila, Philippines\
Sunday, 25 September 2022
<!-- ![Alt text](/images/bulan/starmap.png "starmap") -->
<!-- [![imagen](/images/bulan/starmap.png)](/images/bulan/starmap.png) -->
![starmap](http://chevereto.marktaguiad.dev/images/2024/05/20/starmap.png)
[Lubong](https://moonlanding.marktaguiad.dev) nga haan nak maranyagan ti silaw mun.
<!-- Para kin jay maymaysa nga tao nga parmi nga bilbilibak ken ayayatek. Dim ammu kasano kapateg ti naited mo kanyak tapno sapulek manen ti kalkalikagumak.
Ayan man makadanunan ta, ana man ti pagbalinan ta. Ibatbatiyan ka ti parte ditoy pusok. Adadtoy nak lang mangbuybuya jay langit, baring ton maminsan kadwa ka ditoy nga mangkitkitan ken jay *bulan*. -->
<!--To that one person who never ceases to astonish me. You have given me the courage to chase my dreams again.
Wherever we end up in the world and whatever we become. Adadtoy nak lang mangbuybuya -->

app/archived/cv.md Normal file

@@ -0,0 +1,164 @@
---
layout: post
author: "Mark Taguiad"
title: "Mark's CV"
date: "2023-03-03"
#description: "Mark Taguiad | Resume"
# tags: ["mark", "job", "experience", "cv"]
#categories: ["themes", "syntax"]
#aliases: ["migrate-from-jekyl"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 1
draft: false
margin-left: 2cm
margin-right: 2cm
margin-top: 1cm
margin-bottom: 2cm
keywords:
- 'k8s'
- 'container'
- 'linux'
- 'python'
subject: 'CV'
---
# Mark Christian Taguiad
- <marktaguiad@marktaguiad.dev>
- [marktaguiad.dev](https://marktaguiad.dev/)
- Manila, Philippines
Ambitious IT professional skilled in both Linux/Unix administration and DevOps. Experienced with a wide range of technologies spanning systems and development. Maintains a professional attitude while continually developing technical capabilities.
### Skills
```linux systems administration```
```network administration```
```programming```
```databases```
```devops```
```kubernetes```
```containers```
```webservers```
```git```
```cicd```
```iac```
```proactive monitoring```
**Programming**: Python, Perl, Bash
**Databases**: MySQL, Postgres, sqlite
**Linux**: Alpine, Oracle, Ubuntu, Debian, Arch, OpenSuse
**DevOps**: Ansible, Kubernetes, Podman/Docker, CI/CD, Terraform, IaC
### Experience
### <span>DevOps Engineer, Samsung R&D Institute Philippines (SRPH)</span>
<span>October 2023 - Present</span>
- Active monitoring using Prometheus-Grafana Stack.
- Develop, coordinate and administrate Kubernetes infrastructure.
- Creating CI pipeline for integration, build, vulnerability scan, code quality scan and unit test.
- Create and monitor Kubernetes manifest in CD pipeline (ArgoCD).
- Migration of in-house application (Java-Maven/Gradle) to Docker/Container and Kubernetes.
- Develop API server (python-flask) for testing API endpoint.
- Troubleshooting production environment issues/bugs.
- Creating PoC on new technology related to container and container orchestration.
- Creating alarm and jobs; monitoring in Prometheus and Alertmanager.
- Manage and create Virtual Machine in Openstack.
- Building/creating RPM and QCOW2 images.
- Jira ticket resolution for customer production issues.
- Linux system administration.
- Bash/Python scripting.
### <span>DevOps Engineer, Quick Suite Trading / Computer Voice Systems</span>
<span>March 2023 - October 2023 </span>
- Active monitoring using Prometheus-Grafana Stack.
- Develop, coordinate and administrate Kubernetes infrastructure.
- Created CI pipeline for package build, vulnerability scan and code quality scan.
- Manage and create Virtual Machine in Openstack.
- Building/creating RPM and QCOW2 images.
- VM provision in Proxmox using Terraform/Opentofu.
- Automating server configuration/setup using Ansible.
- Jira ticket resolution for customer production issues.
- Migration of in-house application to Docker/Container and Kubernetes.
- Troubleshooting production environment issues/bugs.
- Creating PoC on new technology related to container orchestration.
- Creating alarm and jobs; monitoring in Prometheus and Alertmanager.
- Linux system administration.
- Bash/Python scripting.
### <span>System Engineer, Amkor Technology</span>
<span>Nov 2021 -- Mar 2023</span>
- Develop, coordinate and administrate Kubernetes infrastructure.
- Spearheaded the development of CI/CD pipeline of all in-house/opensource projects.
- Developed, build docker image of ProcessMaker, and deployed in production environment.
- Setup development and production environment (Docker) for the developer team.
- Setup, configured, and maintained Zabbix infrastructure monitoring tool (containerized) on multiple sites.
- Automate most of Zabbix (agent deployment, housekeeping) task using ansible automation.
- Created scripts (Perl Zabbix Compatible) for monitoring Software AG webMethods.
- Migrated GoAnywhere MFT running on Windows Server to run on a Linux Server (Oracle Linux 8).
- Deployed, configured, and maintained GoAnywhere MFT (SFTP/FTP) and GoAnywhere Gateway (DMZ)
on multiple sites.
- Deployed, configured, and maintained Tibco Spotfire running alongside Apache NIFI on multiple sites.
- Setup, configured, and maintained Redhat JBOSS in development and production environment.
- Performed standard administration task such as OS installation, troubleshooting, problem resolution,
package installation, software upgrade and system hardening.
- Automate most of the basic task such as system hardening, backup, housekeeping.
- Storage management: cleanup, mount, backup and extend.
- Worked with Developer, DBA, Network Team to resolve their daily issues.
- Wrote shell/python/perl scripts for various system task such as agent deployment, backup systems,
installation, and monitoring.
- Performed troubleshooting, incident management and resolve day to day problem raised by users and
customer.
### <span>Network Operations Center Engineer, Amkor Technology</span>
<span>Jul 2021 -- Nov 2021</span>
- Responsible for proactively monitoring servers using tools such as Zabbix, Hobbit Xymon, PRTG and
Jennifer APM.
- Responsible for proactively monitoring network devices using SolarWinds.
- IBM Application System/400; resources, error message and data integrity monitoring.
- Initiating and resolving incident tickets.
- Manage daily and weekly database backup.
- Generate weekly availability reports of servers and network devices.
### Education
### <span>Mapua University</span>
<span>2012 -- 2019</span>
- Bachelor of Science, Electronics Engineering
### <span>Cisco Networking Academy</span>
<span>2018</span>
- Routing and Switching
- Security
### Licenses & Certifications
### <span>Cisco Certified Network Associate (CCNA) - Cisco</span>
<span>Issued June 2021 - Expires June 2024</span>
- CSCO14020527
[Click here to download](/documents/mark-christian-taguiad-resume.pdf)


@@ -0,0 +1,22 @@
---
title: "Moon Landing - A Space Exploration"
date: 2024-04-28
author: "Mark Taguiad"
tags: ["bulan"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 2
---
# Moon Landing - A Space Exploration
Found this in my git archive; this was supposed to be (metaphorically) our space exploration, from the mysteries of the Moon to the endless expanse of the galaxy. Our journey to uncover lunar secrets and seek answers to the universe's greatest questions (ours).
But we couldn't find hope in fatalism. If the future is predetermined, then the here and now becomes all the more precious. Embracing the beauty and significance of what we hold, we could have found hope even in the midst of uncertainty. You don't choose to live or to love someone because it's assured things will work out in the end, but because the alternative is, well, returning to nothing.
Let this serve as a timeless reminder: amidst what happened, the moon's beauty endures, as do you and our memories.
Tumayaben jay rocket'n (umuna nakun), 3..2..1.. [LIFT OFF!](https://moonlanding.marktaguiad.dev)

app/config.toml Normal file

@@ -0,0 +1,116 @@
baseURL = "https://marktaguiad.dev/"
languageCode = "en-us"
title = "marktaguiad.dev"
theme = "cactus"
copyright = "marktaguiad.dev"
disqusShortname = "marktaguiad.dev"
googleAnalytics = "G-XCHH9NNNBX"
# summaryLength = 2
# Main menu which appears below site header.
[[menu.main]]
name = "Home"
url = "/"
weight = 1
[[menu.main]]
name = "Blog"
url = "/post"
weight = 2
# [[menu.main]]
# name = "Tags"
# url = "/tags"
# weight = 3
[[menu.main]]
name = "Gallery"
url = "https://gallery.marktaguiad.dev"
weight = 4
[[menu.main]]
name = "Whoami"
url = "/post/mark-cv"
weight = 5
[markup]
[markup.tableOfContents]
endLevel = 4
ordered = true
startLevel = 2
[markup.highlight]
codeFences = true
guessSyntax = false
hl_Lines = ""
lineNoStart = 1
lineNos = true
lineNumbersInTable = false
noClasses = true
style = "dracula"
tabWidth = 4
[markup.goldmark]
[markup.goldmark.renderer]
unsafe = true
[params]
enableGiscus = true
#colortheme = "white" # dark, light, white, or classic
rss = true # generate rss feed. default value is false
googleAnalyticsAsync = false # use asynchronous tracking. Synchronous tracking by default
showAllPostsArchive = false # default
# Home page settings
description = 'This blog is my learning-by-doing stash, where I dump notes from my sysadmin life and homelab setup.'
mainSections = "posts" # your main section
mainSectionsGallery = "gallery" # your main section
showAllPostsOnHomePage = false # default
postsOnHomePage = 5 # this option will be ignored if showAllPostsOnHomePage is set to true
tagsOverview = true # show tags overview by default.
tagsOverviewGallery = false
showProjectsList = false # show projects list by default (if projects data file exists).
projectsUrl = "https://github.com/mcbtaguiad" # title link for projects list
# https://gohugo.io/functions/format/#hugo-date-and-time-templating-reference
dateFormat = "2006-01-02" # default
# Post page settings
show_updated = true # default
showReadTime = true # default
mainSectionTitle = "Blog"
mainSectionTitleGallery = "Gallery"
[[params.social]]
name = "github"
link = "https://github.com/mcbtaguiad"
[[params.social]]
name = "linkedin"
link = "https://www.linkedin.com/in/mark-christian-taguiad/"
[[params.social]]
name = "email"
link = "marktaguiad@marktaguiad.dev"
[[params.social]]
name = "strava"
link = "https://www.strava.com/athletes/123512498/"
[params.giscus]
data_repo="mcbtaguiad/marktaguiad.dev"
data_repo_id="R_kgDONaJdcw"
data_category="General"
data_category_id="DIC_kwDONaJdc84CxJdw"
data_mapping="pathname"
data_strict="0"
data_reactions_enabled="1"
data_emit_metadata="0"
data_input_position="bottom"
data_theme="preferred_color_scheme"
data_lang="en"
crossorigin="anonymous"

app/config.yaml.papermod Executable file

@@ -0,0 +1,182 @@
baseURL: "https://tagsdev.click/"
title: TagsDev
paginate: 5
theme: cactus
enableInlineShortcodes: true
enableRobotsTXT: true
buildDrafts: false
buildFuture: false
buildExpired: false
enableEmoji: true
pygmentsUseClasses: true
googleAnalytics: G-XCHH9NNNBX

minify:
  disableXML: true
  # minifyOutput: true

languages:
  en:
    languageName: ":en:"
    languageAltTitle: English
    weight: 2
    title: TagsDev
    profileMode:
      enabled: true
      title: Mark Taguiad
      imageUrl: "https://raw.githubusercontent.com/mcbtaguiad/web-tagsdev-hugo/main/app/static/images/tags-black.jpg"
      imageTitle: Mark Taguiad
      imageWidth: 300
      imageHeight: 300
      subtitle: "devops | philomath"
      buttons:
        - name: k8s
          url: "https://dashboard.tagsdev.click"
        - name: cv
          url: cv/
    menu:
      main:
        - name: Blog
          url: post/
          weight: 1
        #- name: Archive
        #  url: archives
        #  weight: 2
        - name: Search
          url: search/
          weight: 3
        - name: Tags
          url: tags/
          weight: 4
        - name: Luna
          url: "https://www.imcollectingmoonlight.com/"
          weight: 5

outputs:
  home:
    - HTML
    - RSS
    - JSON

params:
  env: production # to enable google analytics, opengraph, twitter-cards and schema.
  description: "Tagsdev | Devops"
  author: Mark Taguiad
  defaultTheme: auto
  # disableThemeToggle: true
  ShowShareButtons: false
  ShowReadingTime: true
  #ShareButtons: ["linkedin", "reddit"]
  # disableSpecial1stPost: true
  displayFullLangName: true
  ShowPostNavLinks: true
  ShowBreadCrumbs: true
  ShowCodeCopyButtons: true
  ShowRssButtonInSectionTermList: true
  ShowToc: true
  # comments: false
  #images: ["papermod-cover.png"]

  profileMode:
    enabled: false
    title: Tagsdev
    imageUrl: "#"
    imageTitle: my image
    # imageWidth: 120
    # imageHeight: 120
    buttons:
      - name: Archives
        url: archives
      - name: Tags
        url: tags

  socialIcons:
    - name: github
      url: "https://github.com/mcbtaguiad/"
    - name: Spotify
      url: "https://open.spotify.com/playlist/0D1gr5jrtvcM0K6c40hAzO?si=079b83f114dd4360"
    - name: Email
      url: "mailto:marktaguiad@tagsdev.click"
    - name: Linkedin
      url: "https://www.linkedin.com/in/mark-christian-taguiad/"

  editPost:
    URL: "https://github.com/mcbtaguiad/web-tagsdev-hugo/tree/main/app/content"
    Text: "Suggest Changes" # edit text
    appendFilePath: true # to append file path to Edit link

  # label:
  #   text: "Home"
  #   icon: icon.png
  #   iconHeight: 35

  # analytics:
  #   google:
  #     SiteVerificationTag: "XYZabc"

  assets:
    disableHLJS: true
    # favicon: "<link / abs url>"
    # favicon16x16: "<link / abs url>"
    favicon32x32: "https://raw.githubusercontent.com/mcbtaguiad/web-tagsdev-hugo/main/app/static/images/favicon.ico"
    # apple_touch_icon: "<link / abs url>"
    # safari_pinned_tab: "<link / abs url>"

  # cover:
  #   hidden: true # hide everywhere but not in structured data
  #   hiddenInList: true # hide on list pages and home
  #   hiddenInSingle: true # hide on single page

  # fuseOpts:
  #   isCaseSensitive: false
  #   shouldSort: true
  #   location: 0
  #   distance: 1000
  #   threshold: 0.4
  #   minMatchCharLength: 0
  #   keys: ["title", "permalink", "summary", "content"]

markup:
  goldmark:
    renderer:
      unsafe: true
  highlight:
    noClasses: false
    # anchorLineNos: true
    # codeFences: true
    # guessSyntax: true
    # lineNos: true
    # style: monokai

privacy:
  vimeo:
    disabled: false
    simple: true
  twitter:
    disabled: false
    enableDNT: true
    simple: true
  instagram:
    disabled: false
    simple: true
  youtube:
    disabled: false
    privacyEnhanced: true

services:
  instagram:
    disableInlineCSS: true
  twitter:
    disableInlineCSS: true


@@ -0,0 +1,37 @@
---
layout: post
author: "Mark Taguiad"
title: "Samsung Summer Outing"
date: "2024-06-04"
tags: ["beach", "summer"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 1
draft: false
margin-left: 2cm
margin-right: 2cm
margin-top: 1cm
margin-bottom: 2cm
---
# Samsung Summer Outing
Location: Laiya, Batangas
![1000007947](http://chevereto.marktaguiad.dev/images/2024/08/08/1000007947.jpg)
![1140321](http://chevereto.marktaguiad.dev/images/2024/08/08/_1140321.jpg)
![1000007534](http://chevereto.marktaguiad.dev/images/2024/08/08/1000007534.jpg)
![1000007512](http://chevereto.marktaguiad.dev/images/2024/08/08/1000007512.jpg)
![11403012](http://chevereto.marktaguiad.dev/images/2024/08/08/_11403012.jpg)
![1000007537](http://chevereto.marktaguiad.dev/images/2024/08/08/1000007537.jpg)
![1000007536](http://chevereto.marktaguiad.dev/images/2024/08/08/1000007536.jpg)


@@ -0,0 +1,33 @@
---
title: "Dingalan, Aurora Suka Ride"
date: 2025-06-15
author: "Mark Taguiad"
tags: ["ride", "cycling"]
TocOpen: false
UseHugoToc: true
weight: 2
---
[![imagen](/images/cycling/aurora-001.jpg)](/images/cycling/aurora-001.jpg)
**Route:** <a href="/route/Aurora.gpx" download="Aurora.gpx">
Aurora.gpx
</a>
<!-- [Aurora.gpx](/route/Aurora.gpx){ download } -->
**Distance:** 198Km
**Elevation Gain:** 1129m
**Ride ID:** [link](https://strava.app.link/RtBNP7Bz1Zb)
**After Ride Thoughts:** Betsy, my bike, got her first major scratch ("gasgas"). I got over it eventually, but I bonked real hard along the ride. The original plan was to loop the route, but Bede also bonked, so we ended up taking the bus home. Before heading back, we climbed a route overlooking Dingalan; weirdly enough, the mountain was flooded (maybe it floods there, or it was just water running down the mountain). Good thing the food was tasty and fresh, and I slept hard on the bus home.
**To ride it again?:** Definitely! But next time will be in Baler.
{{< imageviewer images="/images/cycling/aurora-002.jpg,/images/cycling/aurora-003.jpg,/images/cycling/aurora-004.jpg,/images/cycling/aurora-005.jpg,/images/cycling/aurora-006.jpg,/images/cycling/aurora-007.jpg,/images/cycling/aurora-008.jpg,/images/cycling/aurora-009.jpg,/images/cycling/aurora-010.jpg,/images/cycling/aurora-011.jpg,/images/cycling/aurora-012.jpg,/images/cycling/aurora-013.jpg,/images/cycling/aurora-014.jpg,/images/cycling/aurora-015.jpg,/images/cycling/aurora-016.jpg" >}}


@@ -0,0 +1,39 @@
---
title: "Infanta (Batman) Loop"
date: 2025-03-29
author: "Mark Taguiad"
tags: ["ride", "cycling"]
TocOpen: false
UseHugoToc: true
weight: 2
---
[![imagen](/images/cycling/batman-00.png)](/images/cycling/batman-00.png)
**Route:** <a href="/route/batman_loop.gpx" download="batman_loop.gpx"> batman_loop.gpx </a>
**Distance:** 254Km
**Elevation Gain:** 3750m
**Ride ID:** [link](https://www.strava.com/activities/13735804494)
**After Ride Thoughts:** Slept almost a whole day just to recover. Had a lot of fun on my first ever ride with [Padayon Cycling Club](https://www.facebook.com/search/top?q=padayon%20and%20friends): Bede, Keon, Karl and Marc. I underestimated the Sierra Madre going into the mountains of Infanta, Quezon, even though I already have a lot of experience climbing it. Well, I'm to blame, since I didn't check the total elevation gain of the route; also, mid-ride, the group decided to change the route from just the Infanta Arch (90 km) into a full loop. The pain and suffering were worth it, and the scenery was enough to keep me going (I don't know about the group, but I just know we felt the same hahaha, cyclists are a bunch of Ms :) ).
**To ride it again?:** If in reverse, why not!
{{< imageviewer images="/images/cycling/batman-001.jpg,/images/cycling/batman-002.jpg,/images/cycling/batman-003.jpg,/images/cycling/batman-004.jpg,/images/cycling/batman-005.jpg,/images/cycling/batman-006.jpg,/images/cycling/batman-007.jpg,/images/cycling/batman-008.jpg" >}}
<!-- [![imagen](/images/cycling/batman-001.jpg)](/images/cycling/batman-001.jpg)
[![imagen](/images/cycling/batman-002.jpg)](/images/cycling/batman-002.jpg)
[![imagen](/images/cycling/batman-003.jpg)](/images/cycling/batman-003.jpg)
[![imagen](/images/cycling/batman-004.jpg)](/images/cycling/batman-004.jpg)
[![imagen](/images/cycling/batman-005.jpg)](/images/cycling/batman-005.jpg)
[![imagen](/images/cycling/batman-006.jpg)](/images/cycling/batman-006.jpg)
[![imagen](/images/cycling/batman-007.jpg)](/images/cycling/batman-007.jpg)
[![imagen](/images/cycling/batman-008.jpg)](/images/cycling/batman-008.jpg)
[![imagen](/images/cycling/batman-009.jpg)](/images/cycling/batman-009.jpg) -->


@@ -0,0 +1,200 @@
---
title: "Chevereto - Install Notes"
date: 2024-05-22
author: "Mark Taguiad"
tags: ["image-hosting", "k8s", "self-hosted", "docker"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 2
---
Moved and hosted all media used by my website [here](https://chevereto.marktaguiad.dev/).
Below are the issues and errors I encountered using the official Chevereto Docker image. I'm using nfs-csi, so you might not hit these issues with a different CSI.
1. Permission error accessing /var/www/html/
```bash
# kubectl exec into the pod, then:
$ chown -R www-data:www-data /var/www/html
```
2. Cannot create folder /var/www/html/images/_assets/
```bash
# kubectl exec into the pod, then:
$ mkdir -p /var/www/html/images/_assets/
$ chown -R www-data:www-data /var/www/html/images/
```
config.yaml
```yaml
apiVersion: v1
data:
  CHEVERETO_DB_HOST: db_host
  CHEVERETO_DB_USER: root
  CHEVERETO_DB_PASS: verystrongpassword
  CHEVERETO_DB_PORT: '3306'
  CHEVERETO_DB_NAME: chevereto
  CHEVERETO_HOSTNAME: chevereto.marktaguiad.dev
  CHEVERETO_HOSTNAME_PATH: /
  CHEVERETO_HTTPS: '0'
  CHEVERETO_ASSET_STORAGE_TYPE: local
  CHEVERETO_ASSET_STORAGE_URL: http://chevereto.marktaguiad.dev/images/_assets/
  CHEVERETO_ASSET_STORAGE_BUCKET: /var/www/html/images/_assets/
  CHEVERETO_MAX_POST_SIZE: 2G
  CHEVERETO_MAX_UPLOAD_SIZE: 2G
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    app: chevereto
  name: chevereto-config
```
deploy.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: chevereto
  name: chevereto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chevereto
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: chevereto
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 15
              preference:
                matchExpressions:
                  - key: core
                    operator: In
                    values:
                      - "4"
            - weight: 10
              preference:
                matchExpressions:
                  - key: core
                    operator: In
                    values:
                      - "3"
            # - weight: 10
            #   preference:
            #     matchExpressions:
            #       - key: kubernetes.io/role
            #         operator: In
            #         values:
            #           - 'worker'
            - weight: 5
              preference:
                matchExpressions:
                  - key: disk
                    operator: In
                    values:
                      - "ssd"
      containers:
        - image: chevereto/chevereto:4.1.4
          name: chevereto
          ports:
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /var/www/html/images/
              subPath: data
              name: chevereto-data
          envFrom:
            - configMapRef:
                name: chevereto-config
      initContainers:
        - name: volume-permission
          image: ghcr.io/chevereto/chevereto:4.1.4
          command:
            - sh
            - -c
            - "mkdir -p /var/www/html/images/_assets/ && chown -R www-data:www-data /var/www/html/images/"
          volumeMounts:
            - name: chevereto-data
              subPath: data
              mountPath: /var/www/html/images/
          securityContext:
            runAsUser: 0
      restartPolicy: Always
      volumes:
        - name: chevereto-data
          persistentVolumeClaim:
            claimName: chevereto-pvc
status: {}
```
service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: chevereto
  name: chevereto
spec:
  ports:
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    app: chevereto
status:
  loadBalancer: {}
```
ingress.yaml
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chevereto-ingress
  annotations:
    # kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: 1000m
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - chevereto.marktaguiad.dev
      secretName: chevereto-tls
  rules:
    - host: chevereto.marktaguiad.dev
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: chevereto
                port:
                  number: 80
```


@@ -0,0 +1,113 @@
---
title: "Reverse Tunneled Proxy with Cloudflared"
date: 2026-01-03
author: "Mark Taguiad"
tags: ["cloudflare", "docker", "network"]
TocOpen: false
UseHugoToc: true
weight: 2
---
If you bought your domain from Cloudflare and, like me, are broke, then you can enjoy some of the free privileges like cloudflared, which can tunnel your application to the cloud. It also handles TLS certificates and renewal.
# Table of Contents
1. [Requirements](#requirements)
2. [Server Setup](#server-setup)
3. [HTTPS Proxy Route](#https-proxy-route)
4. [SSH Proxy Route](#ssh-proxy-route)
### Requirements
A domain on Cloudflare and a server with internet access.
### Server Setup
Navigate to your [dashboard](https://dash.cloudflare.com/), click Zero Trust - Networks - Connectors, then create a tunnel and select Cloudflared as its type. The next step depends on the system you're using; in my case I'll select Docker. For now, copy the token. Like we did in the Pangolin setup, we need to create an external Docker network.
`docker network create cloudflared-proxy`
*compose.yml*
```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared
    restart: unless-stopped # Restart the container unless manually stopped
    # Logging configuration for Cloudflare Tunnel container
    logging:
      driver: json-file # Use the default json-file logging driver
      options:
        max-size: 100m # Maximum log file size before rotation (100 MB)
        max-file: "10"
    healthcheck:
      # Check if the cloudflared version command works
      test:
        - CMD
        - cloudflared
        - --version
      interval: 30s # Time between health check attempts
      timeout: 10s # Time to wait for a response
      retries: 3 # Number of retries before marking as unhealthy
      start_period: 10s # Delay before health checks begin
    command: tunnel --no-autoupdate run --token someverylongsecrettoken
    networks:
      - cloudflared-proxy

networks:
  cloudflared-proxy:
    name: cloudflared-proxy
    external: true
```
Spin up the container and make sure it is running in the background, then check your dashboard to confirm the tunnel is healthy.
[![imagen](/images/cloudflared-docker/cloudflared-001.png)](/images/cloudflared-docker/cloudflared-001.png)
### HTTPS Proxy Route
Now configure the tunnel: navigate to Published application routes. Using the Jellyfin application from the Pangolin setup, below is a sample configuration.
[![imagen](/images/cloudflared-docker/cloudflared-002.png)](/images/cloudflared-docker/cloudflared-002.png)
### SSH Proxy Route
Just like the previous configuration, except here you set the type to SSH. But first you need to install cloudflared on your PC. Check this [link](https://github.com/cloudflare/cloudflared) for the available installation methods. Once installed, run the login command `cloudflared login`. This will redirect you to your Cloudflare dashboard to authenticate.
[![imagen](/images/cloudflared-docker/cloudflared-003.png)](/images/cloudflared-docker/cloudflared-003.png)
Configure your ssh config.
*.ssh/config*
```
Host yourserver-ssh.yourdomain.com
    ProxyCommand cloudflared access ssh --hostname %h
    User yourUser
    IdentityFile ~/.ssh/id_rsa
    ServerAliveInterval 240
```
Now you can ssh to your server using Cloudflare tunnel.
`ssh root@yourserver-ssh.yourdomain.com`
*Optional: If you haven't created or generated your ssh keys and config*
```
ssh-keygen -t rsa -b 4096
touch ~/.ssh/config
```
To copy your public key to your server.
`ssh-copy-id UserName@yourserverIPorDNS`


@@ -0,0 +1,170 @@
---
title: "Bonding Ethernet"
date: 2026-01-05
author: "Mark Taguiad"
tags: ["linux", "ubuntu", "network"]
TocOpen: false
UseHugoToc: true
weight: 2
---
Install notes on bonding two Ethernet interfaces for load balancing with failover. Also a reminder of how careless I can be when configuring networks: make sure to buy a serial cable in case you mess up your config. Messing up my homelab has become a very dangerous hobby of mine, but weirdly enough I take pride and joy in it (evil laugh).
### Setup
If you are configuring this from an SSH client, be prepared to lose the connection if you misconfigure something. Make sure you have a serial cable, or that your server has a physical interface.
Identify your network interface cards. For my setup these are *enp5s0* and *eno1*.
```
$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp5s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether 00:30:64:5c:e2:4b brd ff:ff:ff:ff:ff:ff
3: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
link/ether d6:98:ed:47:ea:de brd ff:ff:ff:ff:ff:ff permaddr 00:30:64:5c:e2:4a
altname enp0s25
# OR, with addresses:
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 00:30:64:5c:e2:4b brd ff:ff:ff:ff:ff:ff
3: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
link/ether d6:98:ed:47:ea:de brd ff:ff:ff:ff:ff:ff permaddr 00:30:64:5c:e2:4a
altname enp0s25
```
If you are using Ubuntu then you are probably using networkd. If for some reason you changed your networking over to NetworkManager, then I can't help you (hahaha, kidding; madness! I'm talking to my future self who forgot how to do this stuff).
First check the current config of your system; you probably configured your server in DHCP mode. For networkd, check /etc/network; for NetworkManager you can use `nmtui` or `nmcli` to easily check the existing config. Some servers also keep their config in /etc/netplan.
Once you have deleted the current config (make a backup), don't restart the network service yet (I know you are using SSH). Create a config at /etc/netplan/01-bonding.yaml.
*01-bonding.yaml*
```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
    enp5s0:
      dhcp4: false
      dhcp6: false
  bonds:
    bond0:
      interfaces: [enp5s0, eno1]
      parameters:
        mode: balance-alb # active-backup
        #primary: enp5s0
        mii-monitor-interval: 100
      addresses:
        - 192.168.1.69/24
      #gateway4: 192.168.1.1
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
```
Easy, right? It depends on your use case; for this setup I've set it to balance-alb. For reference, here are the other modes (copy-pasted from the internet).
1. **Active-Backup (mode=1)**
   - Behavior: Only one slave is active; a backup takes over if the active slave fails.
   - Switch requirement: None
   - Use case: Simple failover, compatible with any switch.
2. **Balance-rr (mode=0), round-robin**
   - Behavior: Packets are sent in round-robin order across all slaves.
   - Switch requirement: None, but may cause out-of-order packets.
   - Use case: Simple load balancing across multiple NICs.
3. **Balance-xor (mode=2), XOR policy**
   - Behavior: Selects the slave based on MAC addresses (source XOR destination).
   - Switch requirement: Must support 802.3ad or static config.
   - Use case: Load balancing with predictable path selection.
4. **802.3ad (mode=4), LACP (link aggregation)**
   - Behavior: Uses the LACP protocol to combine links.
   - Switch requirement: Switch must support LACP.
   - Use case: True link aggregation with load balancing and redundancy.
5. **Balance-tlb (mode=5), adaptive transmit load balancing**
   - Behavior: Transmit only; uses the load on each slave to balance.
   - Switch requirement: None
   - Use case: Good for outgoing traffic load balancing.
6. **Balance-alb (mode=6), adaptive load balancing**
   - Behavior: TLB plus receive load balancing (requires ARP negotiation).
   - Switch requirement: None
   - Use case: Both transmit and receive load balancing.
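If plain failover is all you need, only the `parameters` block in the netplan file changes; a minimal active-backup sketch (same interface names as above):
```
# fragment: replaces the bonds: section under network:
bonds:
  bond0:
    interfaces: [enp5s0, eno1]
    parameters:
      mode: active-backup
      primary: enp5s0
      mii-monitor-interval: 100
```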
Now you can apply the config.
```
# verify config; it is applied temporarily and rolls back if a problem occurs.
# Sometimes it fails, so prepare for the worst :)
$ netplan try
$ netplan apply
$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v6.8.0-90-generic
Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eno1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0
Slave Interface: enp5s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:30:64:5c:e2:4b
Slave queue ID: 0
Slave Interface: eno1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:30:64:5c:e2:4a
Slave queue ID: 0
```
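A small helper makes the status file above easier to watch from scripts or cron; it simply counts interfaces reporting `MII Status: up` (for this two-slave bond, a healthy reading is 3: the bond itself plus both slaves):

```shell
# Count "MII Status: up" lines in a bonding status file.
# Default path assumes the bond0 setup from this post.
bond_up_count() {
  grep -c '^MII Status: up' "${1:-/proc/net/bonding/bond0}"
}
# Usage: [ "$(bond_up_count)" -eq 3 ] || echo "bond degraded!"
```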
---
layout: post
author: "Mark Taguiad"
title: "Hello, World"
description: "Nagrigat gayam agbiyag ditoy lubong!"
date: "1996-03-29"
tags: ["me"]
---
Nagrigat gayam agbiyag ditoy lubong!
---
title: "Homepage - A highly customizable dashboard for docker and kubernetes cluster"
date: 2024-04-24
author: "Mark Taguiad"
tags: ["self-hosted", "docker", "k8s", "dashboard"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 2
---
<!-- ![Alt text](/images/homepage/homepage.png) -->
[![imagen](/images/homepage/homepage.png)](/images/homepage/homepage.png)
<!-- ![homepage](http://chevereto.marktaguiad.dev/images/2024/08/31/homepage.png) -->
Looking for a flashy and dynamic dashboard to organize your websites and the self-hosted applications running on your cluster/server? Check out [homepage](https://github.com/benphelps/homepage/tree/main)!
### Homepage Core Features
- Docker integration
- Kubernetes integration
- Service Integration
- Various widgets
### Experience with Homepage
It's easy to install and configure: with Docker you may need to mount the config, while with Kubernetes it can be configured using ConfigMaps. It has been my [dashboard](https://dashboard.marktaguiad.dev) for quite some time now, and every website and application I deploy gets added.
It integrates quickly via annotations on an Ingress; here is a sample. With this example, the application/website is added automatically to the group `Links`.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tagsdev-hugo-ingress
annotations:
    gethomepage.dev/description: "TagsDev | Mark Taguiad"
gethomepage.dev/enabled: "true"
gethomepage.dev/group: Links
gethomepage.dev/icon: https://raw.githubusercontent.com/mcbtaguiad/web-tagsdev-hugo/main/app/static/images/fa-tags-nobg.png
gethomepage.dev/name: TagsDev
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
tls:
- hosts:
- marktaguiad.dev
secretName: tagsdev-hugo-tls
rules:
- host: marktaguiad.dev
http:
paths:
- path: /
#pathType: ImplementationSpecific
pathType: Prefix
backend:
service:
name: web-tagsdev
port:
number: 8080
```
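Not everything runs behind an Ingress; for those services the same entry can be declared by hand in homepage's `services.yaml` (a sketch mirroring the annotations above; field names follow the homepage docs, and the group/name values are just this site's):

```yaml
- Links:
    - TagsDev:
        href: https://marktaguiad.dev
        description: TagsDev | Mark Taguiad
        icon: https://raw.githubusercontent.com/mcbtaguiad/web-tagsdev-hugo/main/app/static/images/fa-tags-nobg.png
```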
### Homepage with Docker
Installing it is easy! Just use docker-compose/podman-compose.
```yaml
version: "3.3"
services:
homepage:
image: ghcr.io/benphelps/homepage:latest
container_name: homepage
ports:
- 3000:3000
volumes:
- /path/to/config:/app/config # Make sure your local config directory exists
- /var/run/docker.sock:/var/run/docker.sock:ro # (optional) For docker integrations
```
### Homepage with Kubernetes
Use the unofficial helm chart: https://github.com/jameswynn/helm-charts/tree/main/charts/homepage
```sh
helm repo add jameswynn https://jameswynn.github.io/helm-charts
helm install my-release jameswynn/homepage
```
Or use my kube deploy files.
deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: homepage
namespace: web
labels:
app.kubernetes.io/name: homepage
spec:
revisionHistoryLimit: 3
replicas: 1
strategy:
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: homepage
template:
metadata:
labels:
app.kubernetes.io/name: homepage
spec:
serviceAccountName: homepage
automountServiceAccountToken: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
containers:
- name: homepage
image: ghcr.io/gethomepage/homepage:latest
imagePullPolicy: Always
securityContext:
privileged: true
ports:
- name: http
containerPort: 3000
protocol: TCP
volumeMounts:
- name: homepage-config
mountPath: /app/config
- name: logs
mountPath: /app/config/logs
volumes:
- name: homepage-config
configMap:
name: homepage
- name: logs
emptyDir:
{}
```
service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
name: homepage
namespace: web
labels:
app.kubernetes.io/name: homepage
annotations:
spec:
type: ClusterIP
ports:
- port: 3000
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: homepage
```
serviceaccount.yaml
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: homepage
namespace: web
labels:
app.kubernetes.io/name: homepage
secrets:
- name: homepage
```
clusterrole.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: homepage
labels:
app.kubernetes.io/name: homepage
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
- nodes
verbs:
- get
- list
- apiGroups:
- extensions
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- apiGroups:
- metrics.k8s.io
resources:
- nodes
- pods
verbs:
- get
- list
```
clusterrolebinding.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: homepage
labels:
app.kubernetes.io/name: homepage
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: homepage
subjects:
- kind: ServiceAccount
name: homepage
namespace: web
```
---
title: "Automating Kubernetes Cluster Setup with Ansible"
date: 2025-07-09
author: "Mark Taguiad"
tags: ["ansible", "kubeadm"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 2
---
Over the years, I've found myself repeatedly setting up Kubernetes clusters using kubeadm, and while it works well, the manual process gets repetitive and error-prone. That's why I built [kubeadm-ansible](https://github.com/mcbtaguiad/kubeadm-ansible): an Ansible playbook that automates the entire process of standing up a Kubernetes cluster.
This project was born out of my desire for a simple, reusable way to deploy multi-node clusters quickly, especially in test environments, homelabs, and lightweight production setups.
# Table of Contents
1. [What It Does](#what-it-does)
2. [How to Use It](#how-to-use-it)
3. [Why I Built This](#why-i-built-this)
4. [Final Thoughts](#final-thoughts)
### What It Does
kubeadm-ansible simplifies Kubernetes provisioning by:
1. Installing all required packages (docker, kubeadm, etc.)
2. Setting up networking and the firewall
3. Initializing the control plane
4. Joining additional masters to an existing cluster
5. Joining worker nodes
6. Installing a network plugin (Calico)
7. Supporting both Ubuntu and CentOS
### How to Use It
Clone the repo:
```
git clone https://github.com/mcbtaguiad/kubeadm-ansible.git
cd kubeadm-ansible
```
Update your inventory file at inventory/host.yaml with the IPs or hostnames of your master and worker nodes.
Single Master Cluster
```
all:
hosts:
master1:
ansible_ssh_host: 10.0.0.1
worker1:
ansible_ssh_host: 10.0.0.2
worker2:
ansible_ssh_host: 10.0.0.3
master:
hosts:
master1:
ansible_ssh_host: 10.0.0.1
worker:
hosts:
worker1:
ansible_ssh_host: 10.0.0.2
worker2:
ansible_ssh_host: 10.0.0.3
```
Multi Master Cluster
Note: You need at least 3 master nodes for a highly available cluster.
```
all:
hosts:
master1:
ansible_ssh_host: 10.0.0.1
master2:
ansible_ssh_host: 10.0.0.2
master3:
ansible_ssh_host: 10.0.0.3
worker1:
ansible_ssh_host: 10.0.0.4
worker2:
ansible_ssh_host: 10.0.0.5
master:
hosts:
master1:
ansible_ssh_host: 10.0.0.1
master2:
ansible_ssh_host: 10.0.0.2
master3:
ansible_ssh_host: 10.0.0.3
worker:
hosts:
worker1:
ansible_ssh_host: 10.0.0.4
worker2:
ansible_ssh_host: 10.0.0.5
```
Init cluster
`ansible-playbook playbook/kubeadm_init.yaml -i inventory/hosts.yaml`
That's it. In just a few minutes, you'll have a functional Kubernetes cluster ready to go. The kubeconfig file is generated as **admin.yaml** in the current directory.
The playbook can also add a new master or worker node to an existing cluster. Use these inventory files as reference.
Add Master Node
```
all:
hosts:
existing_master:
ansible_ssh_host: 10.0.0.1
new_master:
ansible_ssh_host: 10.0.0.2
master:
hosts:
existing_master:
ansible_ssh_host: 10.0.0.1
new_master:
ansible_ssh_host: 10.0.0.2
```
Add Worker Node
```
all:
hosts:
existing_master:
ansible_ssh_host: 10.0.0.1
new_worker:
ansible_ssh_host: 10.0.0.2
master:
hosts:
existing_master:
ansible_ssh_host: 10.0.0.1
  worker:
hosts:
new_worker:
ansible_ssh_host: 10.0.0.2
```
Run playbook.
`$ ansible-playbook playbook/add_node.yaml -i inventory/host.yaml`
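Whichever playbook you ran, a quick way to confirm the cluster picked up the nodes is to count `Ready` entries in `kubectl get nodes` output; a small helper sketch:

```shell
# Count nodes whose STATUS column reads "Ready" (skips the header row).
ready_nodes() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}
# Usage (on your workstation):
#   kubectl get nodes --kubeconfig admin.yaml | ready_nodes
```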
### Why I Built This
There are a lot of Kubernetes provisioning tools out there, but many are complex or overkill for smaller environments. I wanted something:
1. Easy to maintain
2. Transparent (no black boxes)
3. Fully Ansible-based for idempotency and clarity
4. Flexible enough to tweak for custom needs
### Final Thoughts
If you're looking to spin up a Kubernetes cluster without diving into the weeds every time, I hope kubeadm-ansible saves you as much time as it's saved me. Contributions and feedback are always welcome. Feel free to fork it, open issues, or submit PRs.
Check it out: [github.com/mcbtaguiad/kubeadm-ansible](https://github.com/mcbtaguiad/kubeadm-ansible)
If you are using k3s instead of kubeadm, check this similar [repo.](https://github.com/mcbtaguiad/k3s-ansible)
---
title: "Provision libvirt Multiple VM with Terraform/Opentofu"
date: 2025-07-05
author: "Mark Taguiad"
tags: ["libvirt", "qemu", "vm", "cloud-init", "kvm", "terafform", "opentofu"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 2
---
### Background
In this post we'll use the libvirt provider with Terraform/OpenTofu to deploy multiple KVM virtual machines.
# Table of Contents
1. [Dependencies](#install-dependencies)
2. [Add Permission to user](#add-permission-to-user)
3. [Terraform init](#terraform-init)
4. [Variable and Config](#variable-and-config)
5. [Terraform plan](#terraform-plan)
6. [Terraform apply](#terraform-apply)
7. [Verify VM](#verify-vm)
### Install Dependencies
```
sudo dnf install libvirt virt-install libvirt-client -y
# enable and start libvirtd service
sudo systemctl enable --now libvirtd
```
Verify that the host can now run guest machines.
`virt-host-validate`
Download a cloud-init image. For this example we'll be using Ubuntu.
`wget https://cloud-images.ubuntu.com/noble/20250523/noble-server-cloudimg-amd64.img`
### Add Permission to user
Add your user to the libvirt group to manage VMs without sudo.
`sudo adduser $USER libvirt`
If you'll be accessing the host remotely, make sure to add your ssh key to the host.
`ssh-copy-id user@server-ip`
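Before handing the connection to Terraform, it's worth sanity-checking that `virsh` can reach the remote host over SSH (user and IP here are placeholders, adjust to your server):

```shell
# Connection URI matching the one used in providers.tf below; adjust user/IP.
URI="qemu+ssh://root@192.168.254.48/system"
if virsh -c "$URI" list --all >/dev/null 2>&1; then
  echo "remote libvirt: ok"
else
  echo "remote libvirt: unreachable"
fi
```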
### Terraform init
Define the provider; we'll be using the provider by [dmacvicar/libvirt](https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs).
Create the directory and files.
`touch main.tf providers.tf terraform.tfvars variables.tf`
Define the provider.
*main.tf*
```
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
```
If you're running `terraform` on the host use;
`uri = "qemu:///system"`
If you're running `terraform` remotely. Change username and IP.
`uri = "qemu+ssh://root@192.168.254.48/system"`
*providers.tf*
```
provider "libvirt" {
#uri = "qemu:///system"
uri = "qemu+ssh://root@192.168.254.48/system"
}
```
Save the files and initialize OpenTofu. If all goes well, the providers will be installed and OpenTofu initialized.
```
$ tofu init
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of dmacvicar/libvirt from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Using previously-installed dmacvicar/libvirt v0.8.3
- Using previously-installed hashicorp/template v2.2.0
│ Warning: Additional provider information from registry
│ The remote registry returned warnings for registry.opentofu.org/hashicorp/template:
│ - This provider is deprecated. Please use the built-in template functions instead of the provider.
OpenTofu has been successfully initialized!
You may now begin working with OpenTofu. Try running "tofu plan" to see
any changes that are required for your infrastructure. All OpenTofu commands
should now work.
If you ever set or change modules or backend configuration for OpenTofu,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
### Variable and Config
Before we execute terraform plan, let us first define some variables and config.
`img_url_path` is the path to the cloud-init image downloaded earlier.
`vm_names` is a list, meaning one VM is created per entry. In this example it will create **master** and **worker** VMs.
*variables.tf*
```
variable "img_url_path" {
default = "/home/User/Downloads/noble-server-cloudimg-amd64.img"
}
variable "vm_names" {
description = "vm names"
type = list(string)
default = ["master", "worker"]
}
# This is optional, if you want to create volume pool
variable "libvirt_disk_path" {
description = "path for libvirt pool"
default = "/mnt/nvme0n1/kvm-pool"
}
```
For a minimal setup, let's set the user and password to `root` and `password123`.
`cloud_init.cfg`
```
ssh_pwauth: True
chpasswd:
list: |
root:password123
expire: False
```
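Password login is fine for a throwaway VM, but for anything longer-lived you'll likely want key-based auth instead; a hedged sketch of an alternative `cloud_init.cfg` (the username and key are placeholders):
```
#cloud-config
ssh_pwauth: False
users:
  - name: ubuntu
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... # placeholder, paste your public key
```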
As for networking, the VMs will just use the default network and get an IP from DHCP. For other configurations check this [link1](https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs/resources/network), [link2](https://wiki.libvirt.org/VirtualNetworking.html#routed-mode-example).
`network_config.cfg`
```
version: 2
ethernets:
ens3:
dhcp4: true
```
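If you'd rather not depend on DHCP, the same netplan-style v2 syntax takes a static address; a sketch using example addresses on libvirt's default 192.168.122.0/24 NAT network:
```
version: 2
ethernets:
  ens3:
    addresses: [192.168.122.101/24]
    routes:
      - to: default
        via: 192.168.122.1
    nameservers:
      addresses: [1.1.1.1, 8.8.8.8]
```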
### Terraform plan
Let's now put together the full `main.tf` for the VMs.
`main.tf`
```
terraform {
required_version = ">= 0.13"
required_providers {
libvirt = {
source = "dmacvicar/libvirt"
version = "0.8.3"
}
}
}
resource "libvirt_volume" "k8s-cloudinit" {
count = length(var.vm_names)
name = "${var.vm_names[count.index]}"
pool = "kvm-pool"
source = var.img_url_path
format = "qcow2"
}
data "template_file" "user_data" {
template = file("${path.module}/cloud_init.cfg")
}
data "template_file" "network_config" {
template = file("${path.module}/network_config.cfg")
}
# for more info about paramater check this out
# https://github.com/dmacvicar/terraform-provider-libvirt/blob/master/website/docs/r/cloudinit.html.markdown
# Use CloudInit to add our ssh-key to the instance
# you can add also meta_data field
resource "libvirt_cloudinit_disk" "commoninit" {
name = "commoninit.iso"
user_data = data.template_file.user_data.rendered
network_config = data.template_file.network_config.rendered
}
# Create the machine
resource "libvirt_domain" "domain-k8s" {
count = length(var.vm_names)
name = var.vm_names[count.index]
memory = "2048"
vcpu = 2
cloudinit = libvirt_cloudinit_disk.commoninit.id
network_interface {
network_name = "default"
}
# IMPORTANT: this is a known bug on cloud images, since they expect a console
# we need to pass it
# https://bugs.launchpad.net/cloud-images/+bug/1573095
console {
type = "pty"
target_port = "0"
target_type = "serial"
}
console {
type = "pty"
target_type = "virtio"
target_port = "1"
}
disk {
volume_id = libvirt_volume.k8s-cloudinit[count.index].id
}
graphics {
type = "spice"
listen_type = "address"
autoport = true
}
}
```
Save the file and run the OpenTofu plan command.
```
$ tofu plan
data.template_file.network_config: Reading...
data.template_file.network_config: Read complete after 0s [id=b36a1372ce4ea68b514354202c26c0365df9a17f25cd5acdeeaea525cd913edc]
data.template_file.user_data: Reading...
data.template_file.user_data: Read complete after 0s [id=69a2f32bd20850703577ebc428d302999bc1b2e11021b1221e7297fef83b2479]
OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
OpenTofu will perform the following actions:
# libvirt_cloudinit_disk.commoninit will be created
+ resource "libvirt_cloudinit_disk" "commoninit" {
+ id = (known after apply)
+ name = "commoninit.iso"
+ network_config = <<-EOT
version: 2
ethernets:
ens3:
dhcp4: true
EOT
+ pool = "default"
+ user_data = <<-EOT
#cloud-config
# vim: syntax=yaml
#
# ***********************
# ---- for more examples look at: ------
# ---> https://cloudinit.readthedocs.io/en/latest/topics/examples.html
# ******************************
#
# This is the configuration syntax that the write_files module
# will know how to understand. encoding can be given b64 or gzip or (gz+b64).
# The content will be decoded accordingly and then written to the path that is
# provided.
#
# Note: Content strings here are truncated for example purposes.
ssh_pwauth: True
chpasswd:
list: |
root:password123
expire: False
EOT
}
# libvirt_domain.domain-k8s[0] will be created
+ resource "libvirt_domain" "domain-k8s" {
+ arch = (known after apply)
+ autostart = (known after apply)
+ cloudinit = (known after apply)
+ emulator = (known after apply)
+ fw_cfg_name = "opt/com.coreos/config"
+ id = (known after apply)
+ machine = (known after apply)
+ memory = 2048
+ name = "master"
+ qemu_agent = false
+ running = true
+ type = "kvm"
+ vcpu = 2
+ console {
+ source_host = "127.0.0.1"
+ source_service = "0"
+ target_port = "0"
+ target_type = "serial"
+ type = "pty"
}
+ console {
+ source_host = "127.0.0.1"
+ source_service = "0"
+ target_port = "1"
+ target_type = "virtio"
+ type = "pty"
}
+ cpu (known after apply)
+ disk {
+ scsi = false
+ volume_id = (known after apply)
+ wwn = (known after apply)
}
+ graphics {
+ autoport = true
+ listen_address = "127.0.0.1"
+ listen_type = "address"
+ type = "spice"
}
+ network_interface {
+ addresses = (known after apply)
+ hostname = (known after apply)
+ mac = (known after apply)
+ network_id = (known after apply)
+ network_name = "default"
}
+ nvram (known after apply)
}
# libvirt_domain.domain-k8s[1] will be created
+ resource "libvirt_domain" "domain-k8s" {
+ arch = (known after apply)
+ autostart = (known after apply)
+ cloudinit = (known after apply)
+ emulator = (known after apply)
+ fw_cfg_name = "opt/com.coreos/config"
+ id = (known after apply)
+ machine = (known after apply)
+ memory = 2048
+ name = "worker"
+ qemu_agent = false
+ running = true
+ type = "kvm"
+ vcpu = 2
+ console {
+ source_host = "127.0.0.1"
+ source_service = "0"
+ target_port = "0"
+ target_type = "serial"
+ type = "pty"
}
+ console {
+ source_host = "127.0.0.1"
+ source_service = "0"
+ target_port = "1"
+ target_type = "virtio"
+ type = "pty"
}
+ cpu (known after apply)
+ disk {
+ scsi = false
+ volume_id = (known after apply)
+ wwn = (known after apply)
}
+ graphics {
+ autoport = true
+ listen_address = "127.0.0.1"
+ listen_type = "address"
+ type = "spice"
}
+ network_interface {
+ addresses = (known after apply)
+ hostname = (known after apply)
+ mac = (known after apply)
+ network_id = (known after apply)
+ network_name = "default"
}
+ nvram (known after apply)
}
# libvirt_volume.k8s-cloudinit[0] will be created
+ resource "libvirt_volume" "k8s-cloudinit" {
+ format = "qcow2"
+ id = (known after apply)
+ name = "master"
+ pool = "kvm-pool"
+ size = (known after apply)
+ source = "/home/mcbtaguiad/Downloads/noble-server-cloudimg-amd64.img"
}
# libvirt_volume.k8s-cloudinit[1] will be created
+ resource "libvirt_volume" "k8s-cloudinit" {
+ format = "qcow2"
+ id = (known after apply)
+ name = "worker"
+ pool = "kvm-pool"
+ size = (known after apply)
+ source = "/home/mcbtaguiad/Downloads/noble-server-cloudimg-amd64.img"
}
Plan: 5 to add, 0 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so OpenTofu can't guarantee to take exactly these actions if you run "tofu apply" now.
```
### Terraform apply
After reviewing the output summary of the plan, we can now create the VMs.
```
$ tofu apply
data.template_file.network_config: Reading...
data.template_file.network_config: Read complete after 0s [id=b36a1372ce4ea68b514354202c26c0365df9a17f25cd5acdeeaea525cd913edc]
data.template_file.user_data: Reading...
data.template_file.user_data: Read complete after 0s [id=69a2f32bd20850703577ebc428d302999bc1b2e11021b1221e7297fef83b2479]
OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
OpenTofu will perform the following actions:
# libvirt_cloudinit_disk.commoninit will be created
+ resource "libvirt_cloudinit_disk" "commoninit" {
+ id = (known after apply)
+ name = "commoninit.iso"
+ network_config = <<-EOT
version: 2
ethernets:
ens3:
dhcp4: true
EOT
+ pool = "default"
+ user_data = <<-EOT
#cloud-config
# vim: syntax=yaml
#
# ***********************
# ---- for more examples look at: ------
# ---> https://cloudinit.readthedocs.io/en/latest/topics/examples.html
# ******************************
#
# This is the configuration syntax that the write_files module
# will know how to understand. encoding can be given b64 or gzip or (gz+b64).
# The content will be decoded accordingly and then written to the path that is
# provided.
#
# Note: Content strings here are truncated for example purposes.
ssh_pwauth: True
chpasswd:
list: |
root:password123
expire: False
EOT
}
# libvirt_domain.domain-k8s[0] will be created
+ resource "libvirt_domain" "domain-k8s" {
+ arch = (known after apply)
+ autostart = (known after apply)
+ cloudinit = (known after apply)
+ emulator = (known after apply)
+ fw_cfg_name = "opt/com.coreos/config"
+ id = (known after apply)
+ machine = (known after apply)
+ memory = 2048
+ name = "master"
+ qemu_agent = false
+ running = true
+ type = "kvm"
+ vcpu = 2
+ console {
+ source_host = "127.0.0.1"
+ source_service = "0"
+ target_port = "0"
+ target_type = "serial"
+ type = "pty"
}
+ console {
+ source_host = "127.0.0.1"
+ source_service = "0"
+ target_port = "1"
+ target_type = "virtio"
+ type = "pty"
}
+ cpu (known after apply)
+ disk {
+ scsi = false
+ volume_id = (known after apply)
+ wwn = (known after apply)
}
+ graphics {
+ autoport = true
+ listen_address = "127.0.0.1"
+ listen_type = "address"
+ type = "spice"
}
+ network_interface {
+ addresses = (known after apply)
+ hostname = (known after apply)
+ mac = (known after apply)
+ network_id = (known after apply)
+ network_name = "default"
}
+ nvram (known after apply)
}
# libvirt_domain.domain-k8s[1] will be created
+ resource "libvirt_domain" "domain-k8s" {
+ arch = (known after apply)
+ autostart = (known after apply)
+ cloudinit = (known after apply)
+ emulator = (known after apply)
+ fw_cfg_name = "opt/com.coreos/config"
+ id = (known after apply)
+ machine = (known after apply)
+ memory = 2048
+ name = "worker"
+ qemu_agent = false
+ running = true
+ type = "kvm"
+ vcpu = 2
+ console {
+ source_host = "127.0.0.1"
+ source_service = "0"
+ target_port = "0"
+ target_type = "serial"
+ type = "pty"
}
+ console {
+ source_host = "127.0.0.1"
+ source_service = "0"
+ target_port = "1"
+ target_type = "virtio"
+ type = "pty"
}
+ cpu (known after apply)
+ disk {
+ scsi = false
+ volume_id = (known after apply)
+ wwn = (known after apply)
}
+ graphics {
+ autoport = true
+ listen_address = "127.0.0.1"
+ listen_type = "address"
+ type = "spice"
}
+ network_interface {
+ addresses = (known after apply)
+ hostname = (known after apply)
+ mac = (known after apply)
+ network_id = (known after apply)
+ network_name = "default"
}
+ nvram (known after apply)
}
# libvirt_volume.k8s-cloudinit[0] will be created
+ resource "libvirt_volume" "k8s-cloudinit" {
+ format = "qcow2"
+ id = (known after apply)
+ name = "master"
+ pool = "kvm-pool"
+ size = (known after apply)
+ source = "/home/mcbtaguiad/Downloads/noble-server-cloudimg-amd64.img"
}
# libvirt_volume.k8s-cloudinit[1] will be created
+ resource "libvirt_volume" "k8s-cloudinit" {
+ format = "qcow2"
+ id = (known after apply)
+ name = "worker"
+ pool = "kvm-pool"
+ size = (known after apply)
+ source = "/home/mcbtaguiad/Downloads/noble-server-cloudimg-amd64.img"
}
Plan: 5 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
OpenTofu will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
libvirt_volume.k8s-cloudinit[1]: Creating...
libvirt_cloudinit_disk.commoninit: Creating...
libvirt_volume.k8s-cloudinit[0]: Creating...
libvirt_cloudinit_disk.commoninit: Creation complete after 4s [id=/var/lib/libvirt/images/commoninit.iso;ecbb0a27-be52-435c-a5db-c0e87b58fd3a]
libvirt_volume.k8s-cloudinit[1]: Still creating... [10s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [10s elapsed]
libvirt_volume.k8s-cloudinit[1]: Still creating... [20s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [20s elapsed]
libvirt_volume.k8s-cloudinit[1]: Still creating... [30s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [30s elapsed]
libvirt_volume.k8s-cloudinit[1]: Still creating... [40s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [40s elapsed]
libvirt_volume.k8s-cloudinit[1]: Still creating... [50s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [51s elapsed]
libvirt_volume.k8s-cloudinit[1]: Still creating... [1m0s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [1m1s elapsed]
libvirt_volume.k8s-cloudinit[1]: Still creating... [1m10s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [1m11s elapsed]
libvirt_volume.k8s-cloudinit[1]: Still creating... [1m20s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [1m21s elapsed]
libvirt_volume.k8s-cloudinit[1]: Still creating... [1m30s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [1m31s elapsed]
libvirt_volume.k8s-cloudinit[1]: Still creating... [1m40s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [1m41s elapsed]
libvirt_volume.k8s-cloudinit[1]: Creation complete after 1m43s [id=/mnt/nvme0n1/kvm-pool/worker]
libvirt_volume.k8s-cloudinit[0]: Still creating... [1m51s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [2m1s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [2m11s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [2m21s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [2m31s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [2m41s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [2m51s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [3m1s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [3m11s elapsed]
libvirt_volume.k8s-cloudinit[0]: Still creating... [3m21s elapsed]
libvirt_volume.k8s-cloudinit[0]: Creation complete after 3m30s [id=/mnt/nvme0n1/kvm-pool/master]
libvirt_domain.domain-k8s[1]: Creating...
libvirt_domain.domain-k8s[0]: Creating...
libvirt_domain.domain-k8s[0]: Creation complete after 3s [id=ecdfd825-9912-4876-86d4-50cfc883101e]
libvirt_domain.domain-k8s[1]: Creation complete after 3s [id=9d27f366-6165-4e32-bebe-785a4b1cc75e]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
```
### Verify VM
```
$ virsh list --all
Id Name State
------------------------------
1 master running
2 worker running
```
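To log in without hunting through DHCP leases, you can pull a VM's IPv4 out of `virsh domifaddr`; a small parsing helper (assumes the default NAT network handed out a lease):

```shell
# Pull the first IPv4 address out of `virsh domifaddr` output.
parse_ipv4() {
  awk '$3 == "ipv4" { sub(/\/.*/, "", $4); print $4; exit }'
}
vm_ip() {
  virsh domifaddr "$1" | parse_ipv4
}
# Usage: ssh root@"$(vm_ip master)"   # password123 from cloud_init.cfg
```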
---
layout: post
author: "Mark Taguiad"
title: "Mark's CV"
date: "2023-03-03"
#description: "Mark Taguiad | Resume"
tags: ["job", "cv"]
#categories: ["themes", "syntax"]
#aliases: ["migrate-from-jekyl"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 1
draft: false
margin-left: 2cm
margin-right: 2cm
margin-top: 1cm
margin-bottom: 2cm
keywords:
- 'k8s'
- 'container'
- 'linux'
- 'python'
subject: 'CV'
---
- <marktaguiad@marktaguiad.dev>
- [marktaguiad.dev](https://marktaguiad.dev/)
- Manila, Philippines
Ambitious IT professional skilled in both Linux/Unix administration and DevOps. Experienced with a range of technologies across systems and development. Maintains a professional attitude while continually developing technical capabilities.
### Skills
```linux systems administration```
```network administration```
```programming```
```databases```
```devops```
```kubernetes```
```containers```
```webservers```
```git```
```cicd```
```iac```
```proactive monitoring```
**Programming**: Python, Perl, Bash
**Databases**: MySQL, Postgres, sqlite
**Linux**: Alpine, Oracle, Ubuntu, Debian, Arch, OpenSuse
**DevOps**: Ansible, Kubernetes, Podman/Docker, CI/CD, Terraform, IaC
### Experience
### <span>DevOps Engineer, Samsung R&D Institute Philippines (SRPH)</span>
<span>October 2023 - October 2024</span>
- Active monitoring using Prometheus-Grafana Stack.
- Develop, coordinate and administrate Kubernetes infrastructure.
- Created CI pipeline for package build, vulnerability scan and code quality scan.
- Manage and create Virtual Machine in Openstack.
- Building/creating RPM and QCOW2 images.
- VM provision in Proxmox using Terraform/Opentofu.
- Automating server configuration/setup using Ansible.
- Jira ticket resolution for customer production issues.
- Migration of in-house application to Docker/Container and Kubernetes.
- Troubleshooting production environment issues/bugs.
- Creating PoC on new technology related to container orchestration.
- Creating alarm and jobs; monitoring in Prometheus and Alertmanager.
- Linux system administration.
- Bash/Python scripting.
### <span>DevOps Engineer, Quick Suite Trading / Computer Voice Systems</span>
<span>March 2023 - October 2023 </span>
- Active monitoring using Prometheus-Grafana Stack.
- Develop custom prometheus exporter to monitor database (MariaDB) instances.
- Develop, coordinate and administrate Kubernetes infrastructure.
- Administrate and maintain Longhorn block storage for Kubernetes.
- Automate deployments (ansible, Gitlab CI), infrastructure as code (helm).
- Assist developers in building and deploying their software to infrastructure (K8S, Docker, Podman).
- Deploy updates and fixes, and provide Level 2 technical support.
- Build tools to reduce occurrence of errors and improve customer experience.
- Administer and maintain private docker registry.
- Installing, configuring, securing, troubleshooting and maintaining UNIX-like operating systems.
- Administer and maintain end user accounts, permissions, and access rights.
### <span>System Engineer, Amkor Technology</span>
<span>Nov 2021 -- Mar 2023</span>
- Develop, coordinate and administrate Kubernetes infrastructure.
- Spearheaded the development of CI/CD pipeline of all in-house/opensource projects.
- Developed, build docker image of ProcessMaker, and deployed in production environment.
- Setup development and production environment (Docker) for the developer team.
- Setup, configured, and maintained Zabbix infrastructure monitoring tool (containerized) on multiple sites.
- Automate Zabbix (agent deployment, housekeeping) task using ansible automation.
- Created scripts (Perl Zabbix Compatible) for monitoring Software AG webMethods.
- Migrated GoAnywhere MFT running on Windows Server to run on a Linux Server (Oracle Linux 8).
- Deployed, configured, and maintained GoAnywhere MFT (SFTP/FTP) and GoAnywhere Gateway (DMZ)
on multiple sites.
- Deployed, configured, and maintained Tibco Spotfire running alongside Apache NIFI on multiple sites.
- Setup, configured, and maintained Redhat JBOSS in development and production environment.
- Performed standard administration task such as OS installation, troubleshooting, problem resolution,
package installation, software upgrade and system hardening.
- Automate basic task such as system hardening, backup, housekeeping.
- Storage management: cleanup, mount, backup and extend.
- Worked with Developer, DBA, Network Team to resolve their daily issues.
- Wrote shell/python/perl scripts for various system task such as agent deployment, backup systems,
installation, and monitoring.
- Performed troubleshooting, incident management and resolve day to day problem raised by users and
customer.
### <span>Network Operations Center Engineer, Amkor Technology</span>
<span>Jul 2021 -- Nov 2021</span>
- Responsible for proactively monitoring server using such tools as Zabbix, Hobbit Xymon, PRTG and
Jennifer APM.
- Responsible for proactively monitoring network devices using SolarWinds.
- IBM Application System/400; resources, error message and data integrity monitoring.
- Initiating and resolving incident tickets.
- Manage daily and weekly database backup.
- Generate weekly availability reports of servers and network devices.
### Education
### <span>Mapua University</span>
<span>2012 -- 2019</span>
- Bachelor of Science, Electronics Engineering
### <span>Cisco Networking Academy</span>
<span>2018</span>
- Routing and Switching
- Security
### Licenses & Certifications
### <span>Cisco Certified Network Associate (CCNA) - Cisco</span>
<span>Issued June 2021 - Expires June 2024</span>
- CSCO14020527
### <span>Electronics Communication Engineer</span>
<span>April 2025</span>
[Click here to download](/documents/mark-christian-taguiad-resume.pdf)

---
layout: post
title: "Multi Network Pod with Multus in Kubernetes"
date: 2024-04-19
author: "Mark Taguiad"
tags: ["k3s", "multus", "k8s", "network", "docker", "linux"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 1
---
# Table of Contents
1. [Installation](#installation)
2. [Multus Manifest](#multus-manifest)
3. [Testing](#testing)
### Installation
> **Note:**
> Check this website for more details on multus. (https://www.redhat.com/en/blog/using-the-multus-cni-in-openshift).
In this lab we'll be using K3S, as it is easy to set up and is production-ready. First we need to disable the flannel backend and the default network policy so we can install Calico (or a CNI of your choice). Disabling servicelb is optional, unless you're implementing MetalLB.
```bash
$ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr=192.168.0.0/16 --disable-network-policy --disable=traefik --disable servicelb" sh -
```
Install calico and multus.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml
$ kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml
```
Check k8s status.
```sh
mcbtaguiad@tags-kvm-ubuntu:~$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-multus-ds-s7kgz 1/1 Running 1 (50m ago) 6h32m
kube-system calico-kube-controllers-787f445f84-kp274 1/1 Running 1 (50m ago) 6h35m
kube-system coredns-6799fbcd5-hs547 1/1 Running 1 (50m ago) 6h36m
kube-system local-path-provisioner-6c86858495-svqw7 1/1 Running 1 (50m ago) 6h36m
kube-system calico-node-5nx6b 1/1 Running 1 (50m ago) 6h35m
kube-system metrics-server-54fd9b65b-crfp7 1/1 Running 1 (50m ago) 6h36m
```
(Optional) Install the reference CNI plugins. These might come in handy as you progress in exploring and implementing Multus.
```sh
$ curl -s -L https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-amd64-v1.4.1.tgz | tar xvzf - -C /opt/cni/bin
```
### Multus Manifest
We're going to use macvlan. Use the NIC that the K8S cluster runs on; in my case that's `enp1s0`. A DHCP server is also set up in KVM with an IP pool from 192.168.122.201 to 192.168.122.254; the pods will attach to addresses on this same subnet.
```sh
mcbtaguiad@tags-kvm-ubuntu:/opt/cni$ ip r
default via 192.168.122.1 dev enp1s0 proto dhcp src 192.168.122.201 metric 100
blackhole 172.16.58.128/26 proto bird
172.16.58.133 dev cali205ed4120a1 scope link
172.16.58.134 dev caliaeca16354bd scope link
172.16.58.135 dev cali1a14292463c scope link
172.16.58.136 dev cali93c699308e7 scope link
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.201 metric 100
192.168.122.1 dev enp1s0 proto dhcp scope link src 192.168.122.201 metric 100
```
Create network attachment definition (macvlan).
```bash
cat <<EOF > macvlan-net.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-net
spec:
config: |
{
"name": "macvlan-net",
"cniVersion": "0.3.1",
"plugins": [
{
"cniVersion": "0.3.1",
"type": "macvlan",
"master": "enp1s0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.122.0/24",
"rangeStart": "192.168.122.2",
"rangeEnd": "192.168.122.254",
"routes": [
{
"dst": "0.0.0.0/0",
"gw": "192.168.122.254"
}
]
}
}
]
}
EOF
kubectl create -f macvlan-net.yaml
```
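One thing worth noting about the `host-local` IPAM above: it hands out pod IPs from `rangeStart` to `rangeEnd` on its own and does not talk to the KVM DHCP server on the same bridge, so overlapping ranges can hand the same address to a pod and a VM. A quick sketch of the pool host-local manages, using only the values from the manifest above:

```python
import ipaddress

# Values from the macvlan-net NetworkAttachmentDefinition above.
subnet = ipaddress.ip_network("192.168.122.0/24")
start = ipaddress.ip_address("192.168.122.2")
end = ipaddress.ip_address("192.168.122.254")

# host-local allocates from this pool independently of any DHCP server
# on the same segment; keep the ranges disjoint to avoid collisions.
pool = [ip for ip in subnet.hosts() if start <= ip <= end]
print(f"{len(pool)} addresses available to host-local")  # → 253 addresses available to host-local
```

In this lab the range overlaps the KVM DHCP pool (.201-.254), which is fine for a quick test, but on a shared network it's worth narrowing (e.g. a `rangeEnd` of 192.168.122.200).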
Create a test pod/deployment and request the extra network with the annotation `k8s.v1.cni.cncf.io/networks: macvlan-net`.
```sh
cat <<EOF > test-multus.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-multus
spec:
selector:
matchLabels:
app: test-multus
replicas: 1
template:
metadata:
labels:
app: test-multus
annotations:
k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
containers:
- name: test-multus
image: testcontainers/helloworld
ports:
- containerPort: 8080
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
EOF
kubectl create -f test-multus.yaml
```
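A pod can also join more than one extra network by listing attachment definitions comma-separated in the same annotation; `storage-net` below is a hypothetical second NetworkAttachmentDefinition, shown only to illustrate the syntax:

```yaml
  template:
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-net,storage-net
```

Each listed network shows up as an additional interface (net1, net2, and so on) inside the pod.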
Get pod network status.
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-multus-868b8598b-t89g6 1/1 Running 0 2m36s
$ kubectl describe pod test-multus-868b8598b-t89g6
```
The pod is attached to 192.168.122.2:
```yaml
Name: test-multus-868b8598b-t89g6
Namespace: default
Priority: 0
Service Account: default
Node: tags-kvm-ubuntu/192.168.122.201
Start Time: Wed, 24 Apr 2024 11:28:34 +0000
Labels: app=test-multus
pod-template-hash=868b8598b
Annotations: cni.projectcalico.org/containerID: d43ea4cbd08d963167a7b53f5dd7a59fe95acd3e73f5bafb69f7345dcb3e1f82
cni.projectcalico.org/podIP: 172.16.58.137/32
cni.projectcalico.org/podIPs: 172.16.58.137/32
k8s.v1.cni.cncf.io/network-status:
[{
"name": "k8s-pod-network",
"ips": [
"172.16.58.137"
],
"default": true,
"dns": {}
},{
"name": "default/macvlan-net",
"interface": "net1",
"ips": [
"192.168.122.2"
],
"mac": "26:39:5a:00:db:60",
"dns": {},
"gateway": [
"192.168.122.254"
]
}]
k8s.v1.cni.cncf.io/networks: macvlan-net
Status: Running
IP: 172.16.58.137
IPs:
IP: 172.16.58.137
Controlled By: ReplicaSet/test-multus-868b8598b
Containers:
test-multus:
Container ID: containerd://c84d601e64a094f8ae8a29c60440392054abe8c7f1ec491694bc08f8c4a2ada9
Image: testcontainers/helloworld
Image ID: docker.io/testcontainers/helloworld@sha256:4ee5a832ef6eee533df7224b80d4cceb9ab219599014f408d0b69690be94c396
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 24 Apr 2024 11:28:46 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-287fk (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-287fk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26s default-scheduler Successfully assigned default/test-multus-868b8598b-t89g6 to tags-kvm-ubuntu
Normal AddedInterface 22s multus Add eth0 [172.16.58.137/32] from k8s-pod-network
Normal AddedInterface 22s multus Add net1 [192.168.122.2/24] from default/macvlan-net
Normal Pulling 22s kubelet Pulling image "testcontainers/helloworld"
Normal Pulled 14s kubelet Successfully pulled image "testcontainers/helloworld" in 7.796s (7.796s including waiting)
Normal Created 14s kubelet Created container test-multus
Normal Started 14s kubelet Started container test-multus
```
### Testing
Exec into the pod and verify that another interface, `net1@tunl0`, is attached.
```sh
$ kubectl exec -it test-multus-868b8598b-t89g6 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue state UP qlen 1000
link/ether 7e:15:53:bb:6b:46 brd ff:ff:ff:ff:ff:ff
inet 172.16.58.137/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::7c15:53ff:febb:6b46/64 scope link
valid_lft forever preferred_lft forever
4: net1@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 26:39:5a:00:db:60 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.2/24 brd 192.168.122.255 scope global net1
valid_lft forever preferred_lft forever
inet6 fe80::2439:5aff:fe00:db60/64 scope link
valid_lft forever preferred_lft forever
```
Curl and ping the pod.
```sh
# test using the host network (kvm host)
mcbtaguiad@pop-os:~/develop$ ip r
default via 192.168.254.254 dev wlp4s0 proto dhcp metric 600
169.254.0.0/16 dev virbr1 scope link metric 1000 linkdown
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev br-f6e0458d2d6c proto kernel scope link src 172.18.0.1 linkdown
192.168.100.0/24 dev virbr1 proto kernel scope link src 192.168.100.1 linkdown
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
192.168.254.0/24 dev wlp4s0 proto kernel scope link src 192.168.254.191 metric 600
# ping test
mcbtaguiad@pop-os:~/develop$ ping 192.168.122.2 -c5
PING 192.168.122.2 (192.168.122.2) 56(84) bytes of data.
64 bytes from 192.168.122.2: icmp_seq=1 ttl=64 time=0.314 ms
64 bytes from 192.168.122.2: icmp_seq=2 ttl=64 time=0.323 ms
64 bytes from 192.168.122.2: icmp_seq=3 ttl=64 time=0.389 ms
64 bytes from 192.168.122.2: icmp_seq=4 ttl=64 time=0.196 ms
64 bytes from 192.168.122.2: icmp_seq=5 ttl=64 time=0.151 ms
--- 192.168.122.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4103ms
rtt min/avg/max/mdev = 0.151/0.274/0.389/0.087 ms
# curl test
mcbtaguiad@pop-os:~/develop$ curl 192.168.122.2:8080
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Hello world</title>
<style>
body {
font-family: sans-serif;
max-width: 38rem;
padding: 2rem;
margin: auto;
}
* {
max-width: 100%;
}
</style>
</head>
<body>
<h1>Hello world</h1>
<img src="logo.png" alt="Testcontainers logo"/>
<p>
This is a test server used for Testcontainers' own self-tests. Find out more about this image on <a href="https://github.com/testcontainers/helloworld">GitHub</a>.
</p>
<p>
Find out more about Testcontainers at <a href="https://www.testcontainers.org">www.testcontainers.org</a>.
</p>
<p>
Hit <a href="/ping"><code>/ping</code></a> for a simple test response.
</p>
<p>
Hit <a href="/uuid"><code>/uuid</code></a> for a UUID that is unique to this running instance of the container.
</p>
</body>
</html>
```

---
title: "Obsidian Cloud Vault"
date: 2025-12-29
author: "Mark Taguiad"
tags: ["obsidian", "docker", "self-hosted"]
TocOpen: false
UseHugoToc: true
weight: 2
---
An Obsidian sync setup using a self-hosted CouchDB or a Cloudflare R2 bucket.
# Table of Contents
1. [CouchDB](#couchdb)
2. [Cloudflare R2 Database](#cloudflare-r2-database)
3. [Obsidian Setup](#obsidian-setup)
### CouchDB
This will be running on a local server and tunneled through the Pangolin proxy. If you are new to this, visit this [link](https://marktaguiad.dev/post/pangolin-docker/). I've set its subdomain to *couchdb.yourdomain.com*; to access the admin page, navigate to */_utils*, i.e. *https://couchdb.yourdomain.com/_utils/*.
*compose.yaml*
```
version: '3.8'
services:
couchdb:
image: couchdb:latest
container_name: couchdb
restart: always
ports:
- "5984:5984"
volumes:
- /srv/volume/couchdb/data:/opt/couchdb/data
- couchdb_config:/opt/couchdb/etc/local.d
environment:
COUCHDB_USER: your_username
COUCHDB_PASSWORD: your_password
networks:
- pangolin
volumes:
couchdb_config:
networks:
pangolin:
name: pangolin
external: true
```
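A stock CouchDB also needs a handful of settings before Self-hosted LiveSync can talk to it. The fragment below is a sketch based on the obsidian-livesync setup documentation (double-check it against the current plugin README); drop it into a file under the mounted `/opt/couchdb/etc/local.d/` directory and restart the container:

```
[couchdb]
single_node = true
max_document_size = 50000000

[chttpd]
require_valid_user = true
max_http_request_size = 4294967296

[chttpd_auth]
require_valid_user = true

[httpd]
WWW-Authenticate = Basic realm="couchdb"
enable_cors = true

[cors]
origins = app://obsidian.md, capacitor://localhost, http://localhost
credentials = true
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE
max_age = 3600
```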
### Cloudflare R2 Database
Go to your Cloudflare dashboard and navigate to R2. Create an R2 bucket, then create an API key scoped to that bucket. Take note of the key and make sure not to expose it publicly. If you are broke like myself, the free tier has its limitations; just monitor your storage from time to time, especially if you upload big files to Obsidian.
- 10 GB-month of storage / month
- 1 million Class A operations / month
- 10 million Class B operations / month
### Obsidian Setup
Install the [app](https://obsidian.md/download). Go to Settings and enable community plugins, then browse the plugins and install *Self-hosted LiveSync*. Once installed, run its setup wizard.

---
title: "Reverse Tunneled Proxy with Pangolin"
date: 2025-12-16
author: "Mark Taguiad"
tags: ["pangolin", "docker", "network"]
TocOpen: false
UseHugoToc: true
weight: 2
---
Pangolin is a self-hosted tunneled reverse proxy management server with identity and access management, designed to securely expose private resources through encrypted WireGuard tunnels running in user space. With Pangolin, you retain full control over your infrastructure while providing a user-friendly and feature-rich solution for managing proxies, authentication, and access, and simplifying complex network setups, all with a clean and simple dashboard web UI.
# Table of Contents
1. [Requirements](#requirements)
2. [DNS Setup](#dns-setup)
3. [VPS Setup](#vps-setup)
4. [Local Server Setup](#local-server-setup)
5. [Adding Resources](#adding-resources)
### Requirements
The setup will use Docker; please look for alternative methods in the official documentation.
(This is just a log of my setup, and I'm pretty sure no one is reading this except me haha)
- VPS with public IP
- Local Server
- Domain
### DNS Setup
Make sure to add these two records.
```
yourdomain.com A Auto YourPublicIP
*.yourdomain.com A Auto yourPublicIP
```
### VPS Setup
Before the installation, make sure no application is using ports 80 and 443. You can check using `netstat -tulpn`.
```
root@engago:~# netstat -tulpn | grep 443
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 3314824/docker-prox
tcp6 0 0 :::443 :::* LISTEN 3314829/docker-prox
root@engago:~# netstat -tulpn | grep 80
tcp 0 0 0.0.0.0:993 0.0.0.0:* LISTEN 3314780/docker-prox
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3314803/docker-prox
tcp6 0 0 :::80 :::* LISTEN 331480
```
Download the installer.
```
curl -fsSL https://static.pangolin.net/get-installer.sh | bash
```
Execute the installer with root privileges:
`sudo ./installer`
```
Welcome to the Pangolin installer!
This installer will help you set up Pangolin on your server.
Please make sure you have the following prerequisites:
- Open TCP ports 80 and 443 and UDP ports 51820 and 21820 on your VPS and firewall.
Lets get started!
=== Basic Configuration ===
Do you want to install the Enterprise version of Pangolin? The EE is free for personal use or for businesses making less than 100k USD annually. (yes/no): no
Enter your base domain (no subdomain e.g. example.com): yourdomain.com
Enter the domain for the Pangolin dashboard (default: pangolin.yourdomain.com):
Enter email for Let's Encrypt certificates: youremail.com
Do you want to use Gerbil to allow tunneled connections (yes/no) (default: yes): yes
=== Email Configuration ===
Enable email functionality (SMTP) (yes/no) (default: no): no
=== Advanced Configuration ===
Is your server IPv6 capable? (yes/no) (default: yes): no
Do you want to download the MaxMind GeoLite2 database for geoblocking functionality? (yes/no) (default: yes): yes
=== Generating Configuration Files ===
Configuration files created successfully!
=== Downloading MaxMind Database ===
Downloading MaxMind GeoLite2 Country database...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4797k 100 4797k 0 0 3140k 0 0:00:01 0:00:01 --:--:-- 3140k
tar: GeoLite2-Country_20260116/GeoLite2-Country.mmdb: Cannot change ownership to uid 0, gid 0: Operation not permitted
tar: GeoLite2-Country_20260116/COPYRIGHT.txt: Cannot change ownership to uid 0, gid 0: Operation not permitted
tar: GeoLite2-Country_20260116/LICENSE.txt: Cannot change ownership to uid 0, gid 0: Operation not permitted
tar: GeoLite2-Country_20260116: Cannot change ownership to uid 0, gid 0: Operation not permitted
tar: Exiting with failure status due to previous errors
Error downloading MaxMind database: failed to extract GeoLite2 database: exit status 2
You can download it manually later if needed.
=== Starting installation ===
Would you like to install and start the containers? (yes/no) (default: yes): yes
Would you like to run Pangolin as Docker or Podman containers? (default: docker):
Would you like to configure ports >= 80 as unprivileged ports? This enables docker containers to listen on low-range ports.
Pangolin will experience startup issues if this is not configured, because it needs to listen on port 80/443 by default.
The installer is about to execute "echo 'net.ipv4.ip_unprivileged_port_start=80' >> /etc/sysctl.conf && sysctl -p". Approve? (yes/no) (default: yes): yes
net.ipv4.ip_unprivileged_port_start = 80
```
This will create three docker containers (traefik, gerbil and pangolin; the fourth container in the listing below is an unrelated app of mine).
```
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
5a7d2951ae5c traefik:v3.6 "/entrypoint.sh --co…" 6 days ago Up 2 days traefik
23b823c4971c fosrl/gerbil:1.3.0 "/entrypoint.sh --re…" 6 days ago Up 2 days 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:21820->21820/udp, [::]:21820->21820/udp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp, 0.0.0.0:51820->51820/udp, [::]:51820->51820/udp gerbil
18d7560ce63e fosrl/pangolin:ee-1.14.1 "docker-entrypoint.s…" 6 days ago Up 2 days (healthy) pangolin
7ead48db901e ghcr.io/mcbtaguiad/sipup-luna-web:main "/bin/ash /docker-en…" 6 days ago Up 2 days (healthy) 8000/tcp
```
Wait for the containers to fully initialize, then visit the Pangolin subdomain; in our case it defaults to `pangolin.yourdomain.com`. Go through the account creation and you are set. We'll discuss later how to add sites and resources to be tunneled.
### Local Server Setup
Now we need to add this server using Newt, the Pangolin client. Navigate to your Pangolin dashboard, go to Sites, click Add Site, and give the site a name. For this example we will be using Docker, so under the Operating System option click Docker. Copy the content of the generated compose.yml; we will modify it to work over a Docker network (using Docker's DNS) so the application doesn't need to be exposed with an IP and port. With this method, applications stay unexposed externally and are tunneled directly to the Pangolin proxy.
[![imagen](/images/pangolin-docker/pangolin-001.png)](/images/pangolin-docker/pangolin-001.png)
Create a docker network.
`docker network create pangolin`
Now create the compose file for the Newt container and attach it to that docker network.
*compose.yml*
```
services:
  newt:
    image: fosrl/newt
    container_name: newt
    restart: unless-stopped
    environment:
      - PANGOLIN_ENDPOINT=https://pangolin.yourdomain.com
      - NEWT_ID=w030vjt04d336nl
      - NEWT_SECRET=719t1lksdk9ma7j3nlldh0zy7atr8f42aal9ag1u5eg1zt0u5q
    networks:
      - pangolin
networks:
  pangolin:
    name: pangolin
    external: true
```
Run the container. Now, every time you run an application that you want to expose or tunnel through the Pangolin proxy, make sure to add the networks section. Check the example application below.
```
services:
jellyfin:
image: jellyfin/jellyfin
container_name: jellyfin
ports:
- 8096:8096/tcp
- 7359:7359/udp
volumes:
- /config:/config
- type: bind
source:/media
target: /media
restart: unless-stopped
networks:
- pangolin
networks:
pangolin:
name: pangolin
external: true
```
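Since traffic reaches the container through the Newt tunnel over the shared Docker network, the `ports:` section above is only needed for direct LAN access. To keep a service completely private, drop it and keep only the network attachment (same jellyfin example, trimmed):

```
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    # no "ports:" section - reachable only over the pangolin network
    volumes:
      - /config:/config
      - /media:/media
    restart: unless-stopped
    networks:
      - pangolin
networks:
  pangolin:
    name: pangolin
    external: true
```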
We mentioned that we won't use an IP and port to tunnel the application; we can utilize Docker's built-in DNS instead. Like Services in Kubernetes, here the DNS record for the application is the `container_name` we set. To check, use `docker ps`.
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
d815bd083d62 jellyfin/jellyfin "/jellyfin/jellyfin" 30 hours ago Up 30 hours (healthy) 0.0.0.0:7359->7359/udp, [::]:7359->7359/udp, 0.0.0.0:8096->8096/tcp, [::]:8096->8096/tcp jellyfin
```
### Adding Resources
Now for the exciting part (kidding). Navigate to Resources, click Public, then Add Resource. Set the subdomain you prefer and set the target to the site added earlier. As discussed, the target can be an IP or just the DNS name (jellyfin), with port 8096. Create the resource. As additional configuration, if you want your application to have an extra layer of security, Pangolin can require SSO (login first) in front of the web application; this can be configured in the resource's settings.
[![imagen](/images/pangolin-docker/pangolin-002.png)](/images/pangolin-docker/pangolin-002.png)

---
title: "Board Exam: Conquered"
date: 2025-04-12
author: "Mark Taguiad"
tags: ["ece", "board-exam"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 2
---
I am grateful and happy to announce that I passed the Electronics Engineering Board Exam.
For almost five years I set it aside, but I always kept thinking about it and telling myself that I would pass it one day. So last year, I left my job and committed to reviewing for the exam.
My deepest thanks go to my mother and father, and to my relatives and friends as well. What I gained is not just a license, but the opportunity and the knowledge to serve and to do good work as an Engineer.
Life goes on, and so does learning. Thank you to everyone who believed in me.
[![imagen](/images/board-exam/ece.png)](/images/board-exam/ece.png)

---
title: "Creating a Proxmox Debian cloud-init Template"
date: 2024-08-21
author: "Mark Taguiad"
tags: ["proxmox", "qemu", "vm", "cloud-init", "debian"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 2
---
### Background
Finally have some time to document and publish this post thanks to a recent long weekend in the Philippines. This is the first part of a two-part series, covering how to create the template used in provisioning VMs; the VMs will then be created and deployed automatically with Opentofu/Terraform [(part 2)](/post/tf-tofu-proxmox).
What sets a cloud-init image apart from a fully-fledged image is that it is essentially a clean, minimal install that can be configured on the fly, unlike a traditional image that you have to install through the GUI or manually. With cloud-init, the hostname, user/password, network and SSH keys can all be set through a config file.
# Table of Contents
1. [Download the base image](#download-the-base-image)
2. [Install qemu-guest-agent](#install-qemu-guest-agent)
3. [Create Proxmox virtual machine](#create-proxmox-virtual-machine)
4. [Convert VM to Template](#convert-vm-to-template)
5. [Optional Starting the VM](#optional-starting-the-vm)
6. [Using Terraform or Opentofu to automate VM creation](#using-terraform-or-opentofu-to-automate-vm-creation)
### Download the base image
In this part you can change the image to your desired distro, but for this lab we'll be using the latest Debian base image - [https://cloud.debian.org/images/cloud/](https://cloud.debian.org/images/cloud/).
```
wget https://cloud.debian.org/images/cloud/bookworm/20240717-1811/debian-12-generic-amd64-20240717-1811.qcow2
```
### Install qemu-guest-agent
Debian cloud-init images don't include qemu-guest-agent by default. To add it we need the virt-customize tool.
Install the package:
```
apt install libguestfs-tools -y
```
Then install qemu-guest-agent to the image.
```
virt-customize -a debian-12-generic-amd64-20240717-1811.qcow2 --install qemu-guest-agent
```
### Create Proxmox virtual machine
> **Note:**
> Values here can be changed; take note of the VM name as it will be used in part 2.
Create a VM with VMID 1002 and the name "debian-20240717-cloudinit-template", with basic resources (2 cores, 2048 MiB RAM) and a virtio network adapter.
```
qm create 1002 --name "debian-20240717-cloudinit-template" --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
```
Import the image into storage (the storage pool depends on your setup) and set disk 0 to use the image.
```
qm importdisk 1002 debian-12-generic-amd64-20240717-1811.qcow2 tags-nvme-thin-pool1
qm set 1002 --scsihw virtio-scsi-pci --scsi0 tags-nvme-thin-pool1:vm-1002-disk-0
```
Set boot to disk and mount cloud-init on ide1.
```
qm set 1002 --boot c --bootdisk scsi0
qm set 1002 --ide1 tags-nvme-thin-pool1:cloudinit
```
Set tty to serial0.
```
qm set 1002 --serial0 socket --vga serial0
```
Enable qemu-guest-agent.
```
qm set 1002 --agent enabled=1
```
### Convert VM to Template
There are two ways to convert it to a template.
Option 1: Using the terminal
```
qm template 1002
```
Option 2: GUI
Navigate to the Proxmox GUI and notice that VM 1002 is listed in the VM list. Click on the VM, open the 'More' menu and select 'Convert to template'.
At this point you can proceed to Part 2.
To convert the template back into a VM, navigate to `/etc/pve/qemu-server` and set `template` to 0.
```
vim /etc/pve/qemu-server/1002.conf
agent: enabled=1
boot: c
bootdisk: scsi0
cores: 2
ide1: tags-nvme-thin-pool1:vm-1002-cloudinit,media=cdrom
memory: 2048
meta: creation-qemu=8.1.5,ctime=1724425025
name: debian-20240717-cloudinit-template
net0: virtio=BC:24:11:90:D0:08,bridge=vmbr0
scsi0: tags-nvme-thin-pool1:base-1002-disk-0,size=2G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=1c1c60fa-62c4-426d-969e-6ebe18ca1d07
template: 0
vga: serial0
vmgenid: 53e2d08a-c57f-4539-abf9-6863e2635ded
```
### Optional Starting the VM
Let's first add an SSH public key; most cloud-init images have user/password login disabled.
```
qm set 1002 --sshkey ~/.ssh/id_rsa.pub
```
Set network for the VM.
```
qm set 1002 --ipconfig0 ip=192.168.254.102/24,gw=192.168.254.254
```
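For anything beyond keys and IPs, Proxmox can also take a full cloud-config file via `--cicustom` (this needs the `snippets` content type enabled on the storage; the file name and contents below are illustrative):

```
# /var/lib/vz/snippets/user-data.yaml
#cloud-config
hostname: debian-lab
package_update: true
packages:
  - qemu-guest-agent
```

Attach it with `qm set 1002 --cicustom "user=local:snippets/user-data.yaml"`; note that a custom user config replaces the user-related values set through the GUI/CLI options.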
To start the VM.
```
qm start 1002
```
To stop the VM.
```
qm stop 1002
```
To destroy the VM.
```
qm destroy 1002
```
### Using Terraform or Opentofu to automate VM creation
Part 2 - [Automate VM Provisioning with Terraform or Opentofu](/post/tf-tofu-proxmox)

---
title: "Site Migration; Django to Hugo"
date: 2023-03-04T18:18:31+08:00
author: "Mark Taguiad"
tags: ["hugo", "django", "python"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 2
---
[![imagen](/images/002-mark-tagsdev-click.jpg)](/images/002-mark-tagsdev-click.jpg)
<!-- ![002 mark tagsdev click](http://chevereto.marktaguiad.dev/images/2024/08/31/002-mark-tagsdev-click.jpg) -->
Back in 2021, I deployed my first website, [mark.tagsdev.click](https://mark.tagsdev.click), running Django as its web framework. But little content or progress was ever published on it. This migration is meant to let me focus more on content creation and DevOps-related projects and learnings.
### Why Django?
It has been a journey; everything was built from scratch, from the HTML/JS/CSS to integrating it into the web framework and packaging it into a container. Django wasn't the first web framework I used; I created my first API with Flask. But I have always been a sucker for more complicated things and technology, hence Django.
Building my first website/blog on Django was a bit overkill, but it has been my gateway from web development to Linux administration and now to DevOps.
The old [website](https://mark.tagsdev.click) will still be up and running as I intend to keep the development active.
### Why Hugo?
I needed to migrate the site to something lower-maintenance that can easily generate templates and pages for my posts. Since Hugo translates Markdown to HTML, I can spend less time tinkering with the code.
Hugo is a fast and modern static site engine. It's built on top of Go, which would push me to learn Go, or maybe PHP :grin:.
If you are interested in trying Hugo, visit this [site](https://gohugo.io/) to get started. It has good documentation that even covers migration from different providers and other static site generators.

---
title: "Subic Audax 200"
date: 2025-12-27
author: "Mark Taguiad"
tags: ["ride", "cycling", "audax"]
TocOpen: false
UseHugoToc: true
weight: 2
---
[![imagen](/images/cycling/subic-audax-001.jpg)](/images/cycling/subic-audax-001.jpg)
**Route:** <a href="/route/Audax_200_Subic.gpx" download="Audax_200_Subic.gpx"> Audax_200_Subic.gpx </a>
**Distance:** 200Km
**Elevation Gain:** 817m
**Ride ID:** [link](https://strava.app.link/nJXXPnIO1Zb)
**After Ride Thoughts:** Just for the clout (kidding). My first ever event, and it's not even a race. Just happy to have finished it and to get a good change of scenery. Subic was a pretty place for a bike ride; I would have loved to do more rides with no time constraint, just you with nature and your traumas creeping through your mind hahaha.
**To ride it again?:** Yes, if for higher category.
{{< imageviewer images="/images/cycling/subic-audax-002.jpg,/images/cycling/subic-audax-003.jpg,/images/cycling/subic-audax-004.jpg,/images/cycling/subic-audax-005.jpg,/images/cycling/subic-audax-006.jpg,/images/cycling/subic-audax-007.jpg,/videos/cycling/subic-audax-001.mp4,/videos/cycling/subic-audax-002.mp4,/videos/cycling/subic-audax-003.mp4,/videos/cycling/subic-audax-004.mp4" >}}

---
title: "Automate VM Provisioning with Terraform or Opentofu"
date: 2024-08-22
author: "Mark Taguiad"
tags: ["proxmox", "qemu", "vm", "opentofu", "terraform"]
ShowToc: true
TocOpen: false
UseHugoToc: true
weight: 2
---
### Background
Been using Terraform/Opentofu since I moved my homelab from a bunch of Raspberry Pis to dedicated servers (an old laptop and PC).
Using this technology has made my DevOps learning faster and more effective.
With HashiCorp's recent announcement changing Terraform's open-source license to a proprietary one, we'll be using Opentofu (just my preference; the commands are still largely identical).
For this lab you can substitute the Opentofu `tofu` command with Terraform's `terraform`.
# Table of Contents
1. [Install Opentofu](#install-opentofu)
2. [Add Permission to user](#add-permission-to-user)
3. [Generate Proxmox API key](#generate-proxmox-api-key)
4. [Opentofu init](#opentofu-init)
5. [Opentofu plan](#opentofu-plan)
6. [Opentofu apply](#opentofu-apply)
7. [Opentofu destroy](#opentofu-destroy)
8. [Optional Remote tfstate backup](#optional-remote-tfstate-backup)
### Install Opentofu
You can check this [link](https://opentofu.org/docs/intro/install/) for installation instructions for your distro.
For this lab, we'll be using Ubuntu.
```
# Download the installer script:
curl --proto '=https' --tlsv1.2 -fsSL https://get.opentofu.org/install-opentofu.sh -o install-opentofu.sh
# Alternatively: wget --secure-protocol=TLSv1_2 --https-only https://get.opentofu.org/install-opentofu.sh -O install-opentofu.sh
# Give it execution permissions:
chmod +x install-opentofu.sh
# Please inspect the downloaded script
# Run the installer:
./install-opentofu.sh --install-method deb
# Remove the installer:
rm -f install-opentofu.sh
```
### Add Permission to user
Navigate to Datacenter > Permissions > Add > API Token Permission and assign the role 'PVEVMAdmin'.
[![imagen](/images/prox-tofu/tofu3.png)](/images/prox-tofu/tofu3.png)
<!-- ![tofu](http://chevereto.marktaguiad.dev/images/2024/08/31/tofu3.png) -->
### Generate Proxmox API key
> **Note:**
> As a less secure alternative, you can also authenticate with a username/password.

Navigate to Datacenter > API Tokens > Add. Enter a Token ID of your choice, and make sure to untick 'Privilege Separation'.
[![imagen](/images/prox-tofu/tofu1.png)](/images/prox-tofu/tofu1.png)
<!-- ![tofu](http://chevereto.marktaguiad.dev/images/2024/08/31/tofu-1.png) -->
Make sure to note the generated key since it will only be displayed once.
[![imagen](/images/prox-tofu/tofu2.png)](/images/prox-tofu/tofu2.png)
<!-- ![tofu](http://chevereto.marktaguiad.dev/images/2024/08/31/tofu2.png) -->
### OpenTofu init
OpenTofu has three stages: `init`, `plan`, and `apply`. Let us first walk through the init phase.
Create the project/lab directory and files.
```
mkdir tofu && cd tofu
touch main.tf providers.tf terraform.tfvars variables.tf
```
Define the required provider; in our case, telmate/proxmox.
*main.tf*
```
terraform {
required_providers {
proxmox = {
source = "telmate/proxmox"
version = "3.0.1-rc1"
}
}
}
```
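If you later move to a stable provider release, the version does not have to be an exact pin; constraint operators work too. Note, though, that prerelease versions such as `3.0.1-rc1` are only matched by an exact version string, so an operator like `~>` would skip them. A sketch:

```
proxmox = {
  source  = "telmate/proxmox"
  version = "~> 3.0"   # any stable 3.x release
}
```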
Now we can define the API credentials.
*providers.tf*
```
provider "proxmox" {
pm_api_url = var.pm_api_url
pm_api_token_id = var.pm_api_token_id
pm_api_token_secret = var.pm_api_token_secret
pm_tls_insecure = true
}
```
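Alternatively, the credentials can come from environment variables instead of files. The Telmate provider reads `PM_API_URL`, `PM_API_TOKEN_ID`, and `PM_API_TOKEN_SECRET` (check the docs for your provider version); the values below are the same sample values used in this lab:

```shell
# Sample credentials for this lab; replace with your own.
export PM_API_URL="https://192.168.254.101:8006/api2/json"
export PM_API_TOKEN_ID='root@pam!tofuapi'
export PM_API_TOKEN_SECRET="apikeygenerated"
```

With these set, the `pm_api_*` arguments in *providers.tf* can be omitted.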
To make this more secure, variables are set in separate files (terraform.tfvars, variables.tf).
Define the variables.
*variables.tf*
```
variable "ssh_key" {
default = "ssh"
}
variable "proxmox_host" {
default = "tags-p51"
}
variable "template_name" {
default = "debian-20240717-cloudinit-template"
}
variable "pm_api_url" {
default = "https://127.0.0.1:8006/api2/json"
}
variable "pm_api_token_id" {
default = "user@pam!token"
}
variable "pm_api_token_secret" {
default = "secret-api-token"
}
variable "k8s_namespace_state" {
default = "default"
}
```
Variable values are sensitive, so make sure to add this file to *.gitignore*.
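A minimal *.gitignore* for this layout might look like the following; the `.terraform/` directory and state files should stay out of version control as well:

```
.terraform/
*.tfstate
*.tfstate.backup
terraform.tfvars
```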
*terraform.tfvars*
```
ssh_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAsABgQDtf3e9lQR1uAypz4nrq2nDj0DvZZGONku5wO+M87wUVTistrY8REsWO2W1N/v4p2eX30Bnwk7D486jmHGpXFrpHM0EMf7wtbNj5Gt1bDHo76WSci/IEHpMrbdD5vN8wCW2ZMwJG4J8dfFpUbdmUDWLL21Quq4q9XDx7/ugs1tCZoNybgww4eCcAi7/PAmXcS/u9huUkyiX4tbaKXQx1co7rTHd7f2u5APTVMzX0CdV9Ezc6l8I+LmjZ9rvQav5N1NgFh9B60qk9QJAb8AK9+aYy7bnBCQJ/BwIkWKYmLoVBi8j8v8UVhVdQMvQxLaxz1YcD8pbgU5s1O2nxM1+TqeGxrGHG6f7jqxhGWe21I7i8HPvOHNJcW4oycxFC5PNKnXNybEawE23oIDQfIG3+EudQKfAkJ3YhmrB2l+InIo0Wi9BHBIUNPzTldMS53q2teNdZR9UDqASdBdMgp4Uzfs1+LGdE5ExecSQzt4kZ8+o9oo9hmee4AYNOTWefXdip1= test@host"
proxmox_host = "proxmox node"
template_name = "debian-20240717-cloudinit-template"
pm_api_url = "https://192.168.254.101:8006/api2/json"
pm_api_token_id = "root@pam!tofuapi"
pm_api_token_secret = "apikeygenerated"
```
Save the files and initialize OpenTofu. If all goes well, the provider will be installed and OpenTofu will be initialized.
```
[mcbtaguiad@tags-t470 tofu]$ tofu init
Initializing the backend...
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to OpenTofu
- Finding telmate/proxmox versions matching "3.0.1-rc1"...
- Installing telmate/proxmox v3.0.1-rc1...
- Installed telmate/proxmox v3.0.1-rc1. Signature validation was skipped due to the registry not containing GPG keys for this provider
OpenTofu has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that OpenTofu can guarantee to make the same selections by default when
you run "tofu init" in the future.
OpenTofu has been successfully initialized!
You may now begin working with OpenTofu. Try running "tofu plan" to see
any changes that are required for your infrastructure. All OpenTofu commands
should now work.
If you ever set or change modules or backend configuration for OpenTofu,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
### OpenTofu plan
Let's now create our VM. We will use the template created in [part 1](/post/proxmox-create-template).
*main.tf*
```
terraform {
required_providers {
proxmox = {
source = "telmate/proxmox"
version = "3.0.1-rc1"
}
}
}
resource "proxmox_vm_qemu" "test-vm" {
count = 1
name = "test-vm-${count.index + 1}"
desc = "test-vm-${count.index + 1}"
tags = "vm"
target_node = var.proxmox_host
vmid = "10${count.index + 1}"
clone = var.template_name
cores = 8
sockets = 1
memory = 8192
agent = 1
bios = "seabios"
scsihw = "virtio-scsi-pci"
bootdisk = "scsi0"
sshkeys = <<EOF
${var.ssh_key}
EOF
os_type = "cloud-init"
cloudinit_cdrom_storage = "tags-nvme-thin-pool1"
ipconfig0 = "ip=192.168.254.1${count.index + 1}/24,gw=192.168.254.254"
disks {
scsi {
scsi0 {
disk {
backup = false
size = 25
storage = "tags-nvme-thin-pool1"
emulatessd = false
}
}
scsi1 {
disk {
backup = false
size = 64
storage = "tags-nvme-thin-pool1"
emulatessd = false
}
}
scsi2 {
disk {
backup = false
size = 64
storage = "tags-hdd-thin-pool1"
emulatessd = false
}
}
}
}
network {
model = "virtio"
bridge = "vmbr0"
firewall = true
link_down = false
}
}
```
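Since `name`, `vmid`, and `ipconfig0` above are all derived from `count.index`, a quick script helps preview what a larger `count` would produce. This is just a sketch mirroring the HCL string interpolation; note that `"10${count.index + 1}"` concatenates strings, so a tenth VM would get vmid 1010, not 110:

```python
# Preview the VM names, IDs, and IPs produced by the count-based
# interpolation in main.tf (sketch mirroring the HCL expressions).
def preview(count):
    vms = []
    for i in range(count):
        n = i + 1
        vms.append({
            "name": f"test-vm-{n}",
            "vmid": int(f"10{n}"),          # "10${count.index + 1}"
            "ip": f"192.168.254.1{n}/24",   # jumps to .110 past n = 9
        })
    return vms

for vm in preview(3):
    print(vm)
```

Worth checking before raising `count`, so VM IDs and addresses do not collide with existing guests.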
Save the file, and we can run the OpenTofu plan command.
```
[mcbtaguiad@tags-t470 tofu]$ tofu plan
OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
OpenTofu will perform the following actions:
# proxmox_vm_qemu.test-vm[0] will be created
+ resource "proxmox_vm_qemu" "test-vm" {
+ additional_wait = 5
+ agent = 1
+ automatic_reboot = true
+ balloon = 0
+ bios = "seabios"
+ boot = (known after apply)
+ bootdisk = "scsi0"
+ clone = "debian-20240717-cloudinit-template"
+ clone_wait = 10
+ cloudinit_cdrom_storage = "tags-nvme-thin-pool1"
+ cores = 8
+ cpu = "host"
+ default_ipv4_address = (known after apply)
+ define_connection_info = true
+ desc = "test-vm-1"
+ force_create = false
+ full_clone = true
+ guest_agent_ready_timeout = 100
+ hotplug = "network,disk,usb"
+ id = (known after apply)
+ ipconfig0 = "ip=192.168.254.11/24,gw=192.168.254.254"
+ kvm = true
+ linked_vmid = (known after apply)
+ memory = 8192
+ name = "test-vm-1"
+ nameserver = (known after apply)
+ onboot = false
+ oncreate = false
+ os_type = "cloud-init"
+ preprovision = true
+ reboot_required = (known after apply)
+ scsihw = "virtio-scsi-pci"
+ searchdomain = (known after apply)
+ sockets = 1
+ ssh_host = (known after apply)
+ ssh_port = (known after apply)
+ sshkeys = <<-EOT
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtf3e9lQR1uAypz4nrq2nDj0DvZZGONku5wO+M87wUVTistrY8REsWO2W1N/v4p2eX30Bnwk7D486jmHGpXFrpHM0EMf7wtbNj5Gt1bDHo76WSci/IEHpMrbdD5vN8wCW2ZMwJG4JC8lfFpUbdmUDWLL21Quq4q9XDx7/ugs1tCZoNybgww4eCcAi7/GAmXcS/u9huUkyiX4tbaKXQx1co7rTHd7f2u5APTVMzX0C1V9Ezc6l8I+LmjZ9rvQav5N1NgFh9B60qk9QJAb8AK9+aYy7bnBCBJ/BwIkWKYmLoVBi8j8v8UVhVdQMvQxLax41YcD8pbgU5s1O2nxM1+TqeGxrGHG6f7jqxhGWe21I7i8HPvOHNJcW4oycxFC5PNKnXNybEawE23oIDQfIG3+EudQKfAkJ3YhmrB2l+InIo0Wi9BHBIUNPzTldMS53q2teNdZR9UDqASdBdMgp4Uzfs1+LGdE5ExecSQzt4kZ8+o9oo9hmee4AYNOTWefXdip0= mtaguiad@tags-p51
EOT
+ tablet = true
+ tags = "vm"
+ target_node = "tags-p51"
+ unused_disk = (known after apply)
+ vcpus = 0
+ vlan = -1
+ vm_state = "running"
+ vmid = 101
+ disks {
+ scsi {
+ scsi0 {
+ disk {
+ backup = false
+ emulatessd = false
+ format = "raw"
+ id = (known after apply)
+ iops_r_burst = 0
+ iops_r_burst_length = 0
+ iops_r_concurrent = 0
+ iops_wr_burst = 0
+ iops_wr_burst_length = 0
+ iops_wr_concurrent = 0
+ linked_disk_id = (known after apply)
+ mbps_r_burst = 0
+ mbps_r_concurrent = 0
+ mbps_wr_burst = 0
+ mbps_wr_concurrent = 0
+ size = 25
+ storage = "tags-nvme-thin-pool1"
}
}
+ scsi1 {
+ disk {
+ backup = false
+ emulatessd = false
+ format = "raw"
+ id = (known after apply)
+ iops_r_burst = 0
+ iops_r_burst_length = 0
+ iops_r_concurrent = 0
+ iops_wr_burst = 0
+ iops_wr_burst_length = 0
+ iops_wr_concurrent = 0
+ linked_disk_id = (known after apply)
+ mbps_r_burst = 0
+ mbps_r_concurrent = 0
+ mbps_wr_burst = 0
+ mbps_wr_concurrent = 0
+ size = 64
+ storage = "tags-nvme-thin-pool1"
}
}
+ scsi2 {
+ disk {
+ backup = false
+ emulatessd = false
+ format = "raw"
+ id = (known after apply)
+ iops_r_burst = 0
+ iops_r_burst_length = 0
+ iops_r_concurrent = 0
+ iops_wr_burst = 0
+ iops_wr_burst_length = 0
+ iops_wr_concurrent = 0
+ linked_disk_id = (known after apply)
+ mbps_r_burst = 0
+ mbps_r_concurrent = 0
+ mbps_wr_burst = 0
+ mbps_wr_concurrent = 0
+ size = 64
+ storage = "tags-hdd-thin-pool1"
}
}
}
}
+ network {
+ bridge = "vmbr0"
+ firewall = true
+ link_down = false
+ macaddr = (known after apply)
+ model = "virtio"
+ queues = (known after apply)
+ rate = (known after apply)
+ tag = -1
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so OpenTofu can't guarantee to take exactly these actions if you run "tofu apply" now.
```
### OpenTofu apply
After reviewing the plan summary, we can now create the VM. Since we declared a count of 1, one VM will be created.
Depending on the hardware in your cluster, provisioning one VM usually takes around one to two minutes.
```
[mcbtaguiad@tags-t470 tofu]$ tofu apply
OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
OpenTofu will perform the following actions:
# proxmox_vm_qemu.test-vm[0] will be created
+ resource "proxmox_vm_qemu" "test-vm" {
+ additional_wait = 5
+ agent = 1
+ automatic_reboot = true
+ balloon = 0
+ bios = "seabios"
+ boot = (known after apply)
+ bootdisk = "scsi0"
+ clone = "debian-20240717-cloudinit-template"
+ clone_wait = 10
+ cloudinit_cdrom_storage = "tags-nvme-thin-pool1"
+ cores = 8
+ cpu = "host"
+ default_ipv4_address = (known after apply)
+ define_connection_info = true
+ desc = "test-vm-1"
+ force_create = false
+ full_clone = true
+ guest_agent_ready_timeout = 100
+ hotplug = "network,disk,usb"
+ id = (known after apply)
+ ipconfig0 = "ip=192.168.254.11/24,gw=192.168.254.254"
+ kvm = true
+ linked_vmid = (known after apply)
+ memory = 8192
+ name = "test-vm-1"
+ nameserver = (known after apply)
+ onboot = false
+ oncreate = false
+ os_type = "cloud-init"
+ preprovision = true
+ reboot_required = (known after apply)
+ scsihw = "virtio-scsi-pci"
+ searchdomain = (known after apply)
+ sockets = 1
+ ssh_host = (known after apply)
+ ssh_port = (known after apply)
+ sshkeys = <<-EOT
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtf3e9lQR1uAypz4nrq2nDj0DvZZGONku5wO+M87wUVTistrY8REsWO2W1N/v4p2eX30Bnwk7D486jmHGpXFrpHM0EMf7wtbNj5Gt1bDHo76WSci/IEHpMrbdD5vN8wCW2ZMwJG4JC8lfFpUbdmUDWLL21Quq4q9XDx7/ugs1tCZoNybgww4eCcAi7/GAmXcS/u9huUkyiX4tbaKXQx1co7rTHd7f2u5APTVMzX0C1V9Ezc6l8I+LmjZ9rvQav5N1NgFh9B60qk9QJAb8AK9+aYy7bnBCBJ/BwIkWKYmLoVBi8j8v8UVhVdQMvQxLax41YcD8pbgU5s1O2nxM1+TqeGxrGHG6f7jqxhGWe21I7i8HPvOHNJcW4oycxFC5PNKnXNybEawE23oIDQfIG3+EudQKfAkJ3YhmrB2l+InIo0Wi9BHBIUNPzTldMS53q2teNdZR9UDqASdBdMgp4Uzfs1+LGdE5ExecSQzt4kZ8+o9oo9hmee4AYNOTWefXdip0= mtaguiad@tags-p51
EOT
+ tablet = true
+ tags = "vm"
+ target_node = "tags-p51"
+ unused_disk = (known after apply)
+ vcpus = 0
+ vlan = -1
+ vm_state = "running"
+ vmid = 101
+ disks {
+ scsi {
+ scsi0 {
+ disk {
+ backup = false
+ emulatessd = false
+ format = "raw"
+ id = (known after apply)
+ iops_r_burst = 0
+ iops_r_burst_length = 0
+ iops_r_concurrent = 0
+ iops_wr_burst = 0
+ iops_wr_burst_length = 0
+ iops_wr_concurrent = 0
+ linked_disk_id = (known after apply)
+ mbps_r_burst = 0
+ mbps_r_concurrent = 0
+ mbps_wr_burst = 0
+ mbps_wr_concurrent = 0
+ size = 25
+ storage = "tags-nvme-thin-pool1"
}
}
+ scsi1 {
+ disk {
+ backup = false
+ emulatessd = false
+ format = "raw"
+ id = (known after apply)
+ iops_r_burst = 0
+ iops_r_burst_length = 0
+ iops_r_concurrent = 0
+ iops_wr_burst = 0
+ iops_wr_burst_length = 0
+ iops_wr_concurrent = 0
+ linked_disk_id = (known after apply)
+ mbps_r_burst = 0
+ mbps_r_concurrent = 0
+ mbps_wr_burst = 0
+ mbps_wr_concurrent = 0
+ size = 64
+ storage = "tags-nvme-thin-pool1"
}
}
+ scsi2 {
+ disk {
+ backup = false
+ emulatessd = false
+ format = "raw"
+ id = (known after apply)
+ iops_r_burst = 0
+ iops_r_burst_length = 0
+ iops_r_concurrent = 0
+ iops_wr_burst = 0
+ iops_wr_burst_length = 0
+ iops_wr_concurrent = 0
+ linked_disk_id = (known after apply)
+ mbps_r_burst = 0
+ mbps_r_concurrent = 0
+ mbps_wr_burst = 0
+ mbps_wr_concurrent = 0
+ size = 64
+ storage = "tags-hdd-thin-pool1"
}
}
}
}
+ network {
+ bridge = "vmbr0"
+ firewall = true
+ link_down = false
+ macaddr = (known after apply)
+ model = "virtio"
+ queues = (known after apply)
+ rate = (known after apply)
+ tag = -1
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so OpenTofu can't guarantee to take exactly these actions if you run "tofu apply" now.
```
Notice that a *.tfstate* file is generated; make sure to save or back up this file, since it is required when reinitializing, reconfiguring, or rebuilding your VMs/infrastructure.
If all goes well, you'll see the created VM in the Proxmox GUI.
[![imagen](/images/prox-tofu/tofu4.png)](/images/prox-tofu/tofu4.png)
<!-- ![tofu](http://chevereto.marktaguiad.dev/images/2024/08/31/tofu4.png) -->
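The state file is plain JSON, so it is easy to inspect what OpenTofu is tracking. A small sketch; the top-level `resources` list with `type`, `name`, and `instances` follows the v4 state format used by recent Terraform/OpenTofu releases:

```python
import json

def resource_addresses(state_path):
    """List the resource addresses recorded in a tfstate file."""
    with open(state_path) as f:
        state = json.load(f)
    addrs = []
    for res in state.get("resources", []):
        for idx, _ in enumerate(res.get("instances", [])):
            addrs.append(f'{res["type"]}.{res["name"]}[{idx}]')
    return addrs
```

For the plan above, this would report something like `proxmox_vm_qemu.test-vm[0]`, matching the addresses `tofu state list` prints.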
### OpenTofu destroy
To delete the VM, run the destroy command.
```
[mcbtaguiad@tags-t470 tofu]$ tofu destroy
proxmox_vm_qemu.test-vm[0]: Refreshing state... [id=tags-p51/qemu/101]
OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy
OpenTofu will perform the following actions:
# proxmox_vm_qemu.test-vm[0] will be destroyed
- resource "proxmox_vm_qemu" "test-vm" {
- additional_wait = 5 -> null
- agent = 1 -> null
- automatic_reboot = true -> null
- balloon = 0 -> null
- bios = "seabios" -> null
- boot = "c" -> null
- bootdisk = "scsi0" -> null
- clone = "debian-20240717-cloudinit-template" -> null
- clone_wait = 10 -> null
- cloudinit_cdrom_storage = "tags-nvme-thin-pool1" -> null
- cores = 8 -> null
- cpu = "host" -> null
- default_ipv4_address = "192.168.254.11" -> null
- define_connection_info = true -> null
- desc = "test-vm-1" -> null
- force_create = false -> null
- full_clone = true -> null
- guest_agent_ready_timeout = 100 -> null
- hotplug = "network,disk,usb" -> null
- id = "tags-p51/qemu/101" -> null
- ipconfig0 = "ip=192.168.254.11/24,gw=192.168.254.254" -> null
- kvm = true -> null
- linked_vmid = 0 -> null
- memory = 8192 -> null
- name = "test-vm-1" -> null
- numa = false -> null
- onboot = false -> null
- oncreate = false -> null
- os_type = "cloud-init" -> null
- preprovision = true -> null
- qemu_os = "other" -> null
- reboot_required = false -> null
- scsihw = "virtio-scsi-pci" -> null
- sockets = 1 -> null
- ssh_host = "192.168.254.11" -> null
- ssh_port = "22" -> null
- sshkeys = <<-EOT
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtf3e9lQR1uAypz4nrq2nDj0DvZZGONku5wO+M87wUVTistrY8REsWO2W1N/v4p2eX30Bnwk7D486jmHGpXFrpHM0EMf7wtbNj5Gt1bDHo76WSci/IEHpMrbdD5vN8wCW2ZMwJG4JC8lfFpUbdmUDWLL21Quq4q9XDx7/ugs1tCZoNybgww4eCcAi7/GAmXcS/u9huUkyiX4tbaKXQx1co7rTHd7f2u5APTVMzX0C1V9Ezc6l8I+LmjZ9rvQav5N1NgFh9B60qk9QJAb8AK9+aYy7bnBCBJ/BwIkWKYmLoVBi8j8v8UVhVdQMvQxLax41YcD8pbgU5s1O2nxM1+TqeGxrGHG6f7jqxhGWe21I7i8HPvOHNJcW4oycxFC5PNKnXNybEawE23oIDQfIG3+EudQKfAkJ3YhmrB2l+InIo0Wi9BHBIUNPzTldMS53q2teNdZR9UDqASdBdMgp4Uzfs1+LGdE5ExecSQzt4kZ8+o9oo9hmee4AYNOTWefXdip0= mtaguiad@tags-p51
EOT -> null
- tablet = true -> null
- tags = "vm" -> null
- target_node = "tags-p51" -> null
- unused_disk = [] -> null
- vcpus = 0 -> null
- vlan = -1 -> null
- vm_state = "running" -> null
- vmid = 101 -> null
- disks {
- scsi {
- scsi0 {
- disk {
- backup = false -> null
- discard = false -> null
- emulatessd = false -> null
- format = "raw" -> null
- id = 0 -> null
- iops_r_burst = 0 -> null
- iops_r_burst_length = 0 -> null
- iops_r_concurrent = 0 -> null
- iops_wr_burst = 0 -> null
- iops_wr_burst_length = 0 -> null
- iops_wr_concurrent = 0 -> null
- iothread = false -> null
- linked_disk_id = -1 -> null
- mbps_r_burst = 0 -> null
- mbps_r_concurrent = 0 -> null
- mbps_wr_burst = 0 -> null
- mbps_wr_concurrent = 0 -> null
- readonly = false -> null
- replicate = false -> null
- size = 25 -> null
- storage = "tags-nvme-thin-pool1" -> null
}
}
- scsi1 {
- disk {
- backup = false -> null
- discard = false -> null
- emulatessd = false -> null
- format = "raw" -> null
- id = 1 -> null
- iops_r_burst = 0 -> null
- iops_r_burst_length = 0 -> null
- iops_r_concurrent = 0 -> null
- iops_wr_burst = 0 -> null
- iops_wr_burst_length = 0 -> null
- iops_wr_concurrent = 0 -> null
- iothread = false -> null
- linked_disk_id = -1 -> null
- mbps_r_burst = 0 -> null
- mbps_r_concurrent = 0 -> null
- mbps_wr_burst = 0 -> null
- mbps_wr_concurrent = 0 -> null
- readonly = false -> null
- replicate = false -> null
- size = 64 -> null
- storage = "tags-nvme-thin-pool1" -> null
}
}
- scsi2 {
- disk {
- backup = false -> null
- discard = false -> null
- emulatessd = false -> null
- format = "raw" -> null
- id = 0 -> null
- iops_r_burst = 0 -> null
- iops_r_burst_length = 0 -> null
- iops_r_concurrent = 0 -> null
- iops_wr_burst = 0 -> null
- iops_wr_burst_length = 0 -> null
- iops_wr_concurrent = 0 -> null
- iothread = false -> null
- linked_disk_id = -1 -> null
- mbps_r_burst = 0 -> null
- mbps_r_concurrent = 0 -> null
- mbps_wr_burst = 0 -> null
- mbps_wr_concurrent = 0 -> null
- readonly = false -> null
- replicate = false -> null
- size = 64 -> null
- storage = "tags-hdd-thin-pool1" -> null
}
}
}
}
- network {
- bridge = "vmbr0" -> null
- firewall = true -> null
- link_down = false -> null
- macaddr = "B2:47:F3:87:C1:83" -> null
- model = "virtio" -> null
- mtu = 0 -> null
- queues = 0 -> null
- rate = 0 -> null
- tag = -1 -> null
}
- smbios {
- uuid = "a08b4d18-4346-4d8d-8fcf-44dddf8fffaf" -> null
}
}
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy all resources?
OpenTofu will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
proxmox_vm_qemu.test-vm[0]: Destroying... [id=tags-p51/qemu/101]
proxmox_vm_qemu.test-vm[0]: Destruction complete after 6s
Destroy complete! Resources: 1 destroyed.
```
### Optional Remote tfstate backup
To back up state files remotely, you can browse the available backends [here](https://opentofu.org/docs/language/settings/).
For this example, we'll use a Kubernetes cluster: the state file will be saved as a secret in the cluster.
Configure *main.tf* and add a `backend "kubernetes"` block inside the `terraform` block. (Note: the `terraform_remote_state` *data source* only reads state that is already stored remotely; for OpenTofu to *store* its own state in the cluster, a backend block is needed. Backend configuration cannot reference input variables, so the values are written literally.)
```
terraform {
  backend "kubernetes" {
    secret_suffix    = "state"
    load_config_file = true
    namespace        = "opentofu-state"
    config_path      = "~/.kube/config"
  }
}
```
Re-run `tofu init` after adding the backend so OpenTofu can migrate the existing state.
After each `apply`, OpenTofu writes the state back to the configured backend, so the tfstate file automatically shows up as a Kubernetes secret.
```
[mcbtaguiad@tags-t470 tofu]$ kubectl get secret -n opentofu-state
NAME TYPE DATA AGE
tfstate-default-state Opaque 1 3d
```
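To peek at the stored state, the secret payload can be decoded. Two assumptions in this sketch: the kubernetes backend keeps the state under the `tfstate` data key, and the payload is gzip-compressed (the base64 layer is simply how Kubernetes secret data is returned):

```python
import base64
import gzip
import json

def decode_state(b64_payload: str) -> dict:
    """Decode a base64-encoded, gzip-compressed tfstate payload."""
    raw = gzip.decompress(base64.b64decode(b64_payload))
    return json.loads(raw)

# Hypothetical usage: feed in the output of
#   kubectl get secret tfstate-default-state -n opentofu-state \
#     -o jsonpath='{.data.tfstate}'
```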

app/content/search.md Executable file

@@ -0,0 +1,4 @@
---
title: "Search"
layout: "search"
---

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
{"Target":"css/styles.a0fc6eb9aaf331cb78088d6e0c8b1cceb7116cdb106546626876b46e1636b5e670e8202f5f08a9b0f9f2f687d1eaa87068f45a8848924d4c77e2a3354e181df2.css","MediaType":"text/css","Data":{"Integrity":"sha512-oPxuuarzMct4CI1uDIsczrcRbNsQZUZiaHa0bhY2teZw6CAvXwipsPny9ofR6qhwaPRaiEiSTUx34qM1Thgd8g=="}}

Binary file not shown.

BIN
app/static/images/bulan/fa.png Executable file

BIN
app/static/images/favicon.ico Executable file

File diff suppressed because it is too large

310458
app/static/route/Aurora.gpx Normal file

Binary file not shown.

app/themes/PaperMod/LICENSE Executable file

@@ -0,0 +1,22 @@
MIT License
Copyright (c) 2020 nanxiaobei and adityatelange
Copyright (c) 2021-2023 adityatelange
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,11 @@
.not-found {
position: absolute;
left: 0;
right: 0;
display: flex;
align-items: center;
justify-content: center;
height: 80%;
font-size: 160px;
font-weight: 700;
}


@@ -0,0 +1,44 @@
.archive-posts {
width: 100%;
font-size: 16px;
}
.archive-year {
margin-top: 40px;
}
.archive-year:not(:last-of-type) {
border-bottom: 2px solid var(--border);
}
.archive-month {
display: flex;
align-items: flex-start;
padding: 10px 0;
}
.archive-month-header {
margin: 25px 0;
width: 200px;
}
.archive-month:not(:last-of-type) {
border-bottom: 1px solid var(--border);
}
.archive-entry {
position: relative;
padding: 5px;
margin: 10px 0;
}
.archive-entry-title {
margin: 5px 0;
font-weight: 400;
}
.archive-count,
.archive-meta {
color: var(--secondary);
font-size: 14px;
}


@@ -0,0 +1,60 @@
.footer,
.top-link {
font-size: 12px;
color: var(--secondary);
}
.footer {
max-width: calc(var(--main-width) + var(--gap) * 2);
margin: auto;
padding: calc((var(--footer-height) - var(--gap)) / 2) var(--gap);
text-align: center;
line-height: 24px;
}
.footer span {
margin-inline-start: 1px;
margin-inline-end: 1px;
}
.footer span:last-child {
white-space: nowrap;
}
.footer a {
color: inherit;
border-bottom: 1px solid var(--secondary);
}
.footer a:hover {
border-bottom: 1px solid var(--primary);
}
.top-link {
visibility: hidden;
position: fixed;
bottom: 60px;
right: 30px;
z-index: 99;
background: var(--tertiary);
width: 42px;
height: 42px;
padding: 12px;
border-radius: 64px;
transition: visibility 0.5s, opacity 0.8s linear;
}
.top-link,
.top-link svg {
filter: drop-shadow(0px 0px 0px var(--theme));
}
.footer a:hover,
.top-link:hover {
color: var(--primary);
}
.top-link:focus,
#theme-toggle:focus {
outline: 0;
}


@@ -0,0 +1,93 @@
.nav {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
max-width: calc(var(--nav-width) + var(--gap) * 2);
margin-inline-start: auto;
margin-inline-end: auto;
line-height: var(--header-height);
}
.nav a {
display: block;
}
.logo,
#menu {
display: flex;
margin: auto var(--gap);
}
.logo {
flex-wrap: inherit;
}
.logo a {
font-size: 24px;
font-weight: 700;
}
.logo a img, .logo a svg {
display: inline;
vertical-align: middle;
pointer-events: none;
transform: translate(0, -10%);
border-radius: 6px;
margin-inline-end: 8px;
}
button#theme-toggle {
font-size: 26px;
margin: auto 4px;
}
body.dark #moon {
vertical-align: middle;
display: none;
}
body:not(.dark) #sun {
display: none;
}
#menu {
list-style: none;
word-break: keep-all;
overflow-x: auto;
white-space: nowrap;
}
#menu li + li {
margin-inline-start: var(--gap);
}
#menu a {
font-size: 16px;
}
#menu .active {
font-weight: 500;
border-bottom: 2px solid currentColor;
}
.lang-switch li,
.lang-switch ul,
.logo-switches {
display: inline-flex;
margin: auto 4px;
}
.lang-switch {
display: flex;
flex-wrap: inherit;
}
.lang-switch a {
margin: auto 3px;
font-size: 16px;
font-weight: 500;
}
.logo-switches {
flex-wrap: inherit;
}


@@ -0,0 +1,68 @@
.main {
position: relative;
min-height: calc(100vh - var(--header-height) - var(--footer-height));
max-width: calc(var(--main-width) + var(--gap) * 2);
margin: auto;
padding: var(--gap);
}
.page-header h1 {
font-size: 40px;
}
.pagination {
display: flex;
}
.pagination a {
color: var(--theme);
font-size: 13px;
line-height: 36px;
background: var(--primary);
border-radius: calc(36px / 2);
padding: 0 16px;
}
.pagination .next {
margin-inline-start: auto;
}
.social-icons {
padding: 12px 0;
}
.social-icons a:not(:last-of-type) {
margin-inline-end: 12px;
}
.social-icons a svg {
height: 26px;
width: 26px;
}
code {
direction: ltr;
}
div.highlight,
pre {
position: relative;
}
.copy-code {
display: none;
position: absolute;
top: 4px;
right: 4px;
color: rgba(255, 255, 255, 0.8);
background: rgba(78, 78, 78, 0.8);
border-radius: var(--radius);
padding: 0 5px;
font-size: 14px;
user-select: none;
}
div.highlight:hover .copy-code,
pre:hover .copy-code {
display: block;
}


@@ -0,0 +1,104 @@
.first-entry {
position: relative;
display: flex;
flex-direction: column;
justify-content: center;
min-height: 320px;
margin: var(--gap) 0 calc(var(--gap) * 2) 0;
}
.first-entry .entry-header {
overflow: hidden;
display: -webkit-box;
-webkit-box-orient: vertical;
-webkit-line-clamp: 3;
}
.first-entry .entry-header h1 {
font-size: 34px;
line-height: 1.3;
}
.first-entry .entry-content {
margin: 14px 0;
font-size: 16px;
-webkit-line-clamp: 3;
}
.first-entry .entry-footer {
font-size: 14px;
}
.home-info .entry-content {
-webkit-line-clamp: unset;
}
.post-entry {
position: relative;
margin-bottom: var(--gap);
padding: var(--gap);
background: var(--entry);
border-radius: var(--radius);
transition: transform 0.1s;
border: 1px solid var(--border);
}
.post-entry:active {
transform: scale(0.96);
}
.tag-entry .entry-cover {
display: none;
}
.entry-header h2 {
font-size: 24px;
line-height: 1.3;
}
.entry-content {
margin: 8px 0;
color: var(--secondary);
font-size: 14px;
line-height: 1.6;
overflow: hidden;
display: -webkit-box;
-webkit-box-orient: vertical;
-webkit-line-clamp: 2;
}
.entry-footer {
color: var(--secondary);
font-size: 13px;
}
.entry-link {
position: absolute;
left: 0;
right: 0;
top: 0;
bottom: 0;
}
.entry-cover,
.entry-isdraft {
font-size: 14px;
color: var(--secondary);
}
.entry-cover {
margin-bottom: var(--gap);
text-align: center;
}
.entry-cover img {
border-radius: var(--radius);
pointer-events: none;
width: 100%;
height: auto;
}
.entry-cover a {
color: var(--secondary);
box-shadow: 0 1px 0 var(--primary);
}


@@ -0,0 +1,403 @@
.page-header,
.post-header {
margin: 24px auto var(--content-gap) auto;
}
.post-title {
margin-bottom: 2px;
font-size: 40px;
}
.post-description {
margin-top: 10px;
margin-bottom: 5px;
}
.post-meta,
.breadcrumbs {
color: var(--secondary);
font-size: 14px;
display: flex;
flex-wrap: wrap;
}
.post-meta .i18n_list li {
display: inline-flex;
list-style: none;
margin: auto 3px;
box-shadow: 0 1px 0 var(--secondary);
}
.breadcrumbs a {
font-size: 16px;
}
.post-content {
color: var(--content);
}
.post-content h3,
.post-content h4,
.post-content h5,
.post-content h6 {
margin: 24px 0 16px;
}
.post-content h1 {
margin: 40px auto 32px;
font-size: 40px;
}
.post-content h2 {
margin: 32px auto 24px;
font-size: 32px;
}
.post-content h3 {
font-size: 24px;
}
.post-content h4 {
font-size: 16px;
}
.post-content h5 {
font-size: 14px;
}
.post-content h6 {
font-size: 12px;
}
.post-content a,
.toc a:hover {
box-shadow: 0 1px 0;
box-decoration-break: clone;
-webkit-box-decoration-break: clone;
}
.post-content a code {
margin: auto 0;
border-radius: 0;
box-shadow: 0 -1px 0 var(--primary) inset;
}
.post-content del {
text-decoration: none;
background: linear-gradient(to right, var(--primary) 100%, transparent 0) 0 50%/1px 1px repeat-x;
}
.post-content dl,
.post-content ol,
.post-content p,
.post-content figure,
.post-content ul {
margin-bottom: var(--content-gap);
}
.post-content ol,
.post-content ul {
padding-inline-start: 20px;
}
.post-content li {
margin-top: 5px;
}
.post-content li p {
margin-bottom: 0;
}
.post-content dl {
display: flex;
flex-wrap: wrap;
margin: 0;
}
.post-content dt {
width: 25%;
font-weight: 700;
}
.post-content dd {
width: 75%;
margin-inline-start: 0;
padding-inline-start: 10px;
}
.post-content dd ~ dd,
.post-content dt ~ dt {
margin-top: 10px;
}
.post-content table {
margin-bottom: 32px;
}
.post-content table th,
.post-content table:not(.highlighttable, .highlight table, .gist .highlight) td {
min-width: 80px;
padding: 12px 8px;
line-height: 1.5;
border-bottom: 1px solid var(--border);
}
.post-content table th {
font-size: 14px;
text-align: start;
}
.post-content table:not(.highlighttable) td code:only-child {
margin: auto 0;
}
.post-content .highlight table {
border-radius: var(--radius);
}
.post-content .highlight:not(table) {
margin: 10px auto;
background: var(--hljs-bg) !important;
border-radius: var(--radius);
direction: ltr;
}
.post-content li > .highlight {
margin-inline-end: 0;
}
.post-content ul pre {
margin-inline-start: calc(var(--gap) * -2);
}
.post-content .highlight pre {
margin: 0;
}
.post-content .highlighttable {
table-layout: fixed;
}
.post-content .highlighttable td:first-child {
width: 40px;
}
.post-content .highlighttable td .linenodiv {
padding-inline-end: 0 !important;
}
.post-content .highlighttable td .highlight,
.post-content .highlighttable td .linenodiv pre {
margin-bottom: 0;
}
.post-content code {
margin: auto 4px;
padding: 4px 6px;
font-size: 0.78em;
line-height: 1.5;
background: var(--code-bg);
border-radius: 2px;
}
.post-content pre code {
display: block;
margin: auto 0;
padding: 10px;
color: rgb(213, 213, 214);
background: var(--hljs-bg) !important;
border-radius: var(--radius);
overflow-x: auto;
word-break: break-all;
}
.post-content blockquote {
margin: 20px 0;
padding: 0 14px;
border-inline-start: 3px solid var(--primary);
}
.post-content hr {
margin: 30px 0;
height: 2px;
background: var(--tertiary);
border: 0;
}
.post-content iframe {
max-width: 100%;
}
.post-content img {
border-radius: 4px;
margin: 1rem 0;
}
.post-content img[src*="#center"] {
margin: 1rem auto;
}
.post-content figure.align-center {
text-align: center;
}
.post-content figure > figcaption {
color: var(--primary);
font-size: 16px;
font-weight: bold;
margin: 8px 0 16px;
}
.post-content figure > figcaption > p {
color: var(--secondary);
font-size: 14px;
font-weight: normal;
}
.toc {
margin: 0 2px 40px 2px;
border: 1px solid var(--border);
background: var(--code-bg);
border-radius: var(--radius);
padding: 0.4em;
}
.dark .toc {
background: var(--entry);
}
.toc details summary {
cursor: zoom-in;
margin-inline-start: 20px;
}
.toc details[open] summary {
cursor: zoom-out;
}
.toc .details {
display: inline;
font-weight: 500;
}
.toc .inner {
margin: 0 20px;
padding: 10px 20px;
}
.toc li ul {
margin-inline-start: var(--gap);
}
.toc summary:focus {
outline: 0;
}
.post-footer {
margin-top: 56px;
}
.post-tags li {
display: inline-block;
margin-inline-end: 3px;
margin-bottom: 5px;
}
.post-tags a,
.share-buttons,
.paginav {
border-radius: var(--radius);
background: var(--code-bg);
border: 1px solid var(--border);
}
.post-tags a {
display: block;
padding-inline-start: 14px;
padding-inline-end: 14px;
color: var(--secondary);
font-size: 14px;
line-height: 34px;
background: var(--code-bg);
}
.post-tags a:hover,
.paginav a:hover {
background: var(--border);
}
.share-buttons {
margin: 14px 0;
padding-inline-start: var(--radius);
display: flex;
justify-content: center;
overflow-x: auto;
}
.share-buttons a {
margin-top: 10px;
}
.share-buttons a:not(:last-of-type) {
margin-inline-end: 12px;
}
h1:hover .anchor,
h2:hover .anchor,
h3:hover .anchor,
h4:hover .anchor,
h5:hover .anchor,
h6:hover .anchor {
display: inline-flex;
color: var(--secondary);
margin-inline-start: 8px;
font-weight: 500;
user-select: none;
}
.paginav {
margin: 10px 0;
display: flex;
line-height: 30px;
border-radius: var(--radius);
}
.paginav a {
padding-inline-start: 14px;
padding-inline-end: 14px;
border-radius: var(--radius);
}
.paginav .title {
letter-spacing: 1px;
text-transform: uppercase;
font-size: small;
color: var(--secondary);
}
.paginav .prev,
.paginav .next {
width: 50%;
}
.paginav span:hover:not(.title) {
box-shadow: 0 1px 0;
}
.paginav .next {
margin-inline-start: auto;
text-align: right;
}
[dir="rtl"] .paginav .next {
text-align: left;
}
h1>a>svg {
display: inline;
}
img.in-text {
display: inline;
margin: auto;
}
