Kubelog brings Kubernetes logs into Backstage. It adds a tab on each entity page where you can stream logs from the pods that belong to that service. It connects to Kwirth, a simple log exporter that runs as a single pod. Kubelog is the frontend. Kwirth provides the data.
With Kubelog you avoid context switches. Developers can search for an entity in Backstage and see logs from the clusters you have registered. The plugin maps entities to pods using the standard backstage.io/kubernetes-id label. It can show namespaces per cluster, and logs refresh in real time.
Teams also get a limited set of operations in a safe way. Starting with version 0.9, Kubelog can restart pods when the user has permission. It is meant for quick recovery tasks, not as a full operations console. Access is controlled in the backend: you can scope who sees which pods, namespaces, or clusters, and you can grant view rights without restart rights.
Use it to speed up triage, to validate deployments, or to share visibility with partner teams. It keeps engineers inside Backstage during on-call work and reduces the need for kubectl or cluster-wide tools for everyday checks.
Installation Instructions
These instructions apply to self-hosted Backstage only.
Install the backend and frontend packages
# from your Backstage root
yarn --cwd packages/backend add @jfvilas/plugin-kubelog-backend
yarn --cwd packages/app add @jfvilas/plugin-kubelog @jfvilas/plugin-kubelog-common
Review Kubelog and Kwirth version compatibility before you pick versions. Examples of compatible pairs:
- Kubelog 0.11.6 with Kwirth 0.4.20
- Kubelog 0.11.1 with Kwirth 0.3.160
- Kubelog 0.10.1 with Kwirth 0.2.213
- Kubelog 0.9.5 with Kwirth 0.2.8
You must install Kwirth in every Kubernetes cluster you want to read logs from. Install Kwirth following its project docs.
Wire the backend on the new backend system
If your Backstage backend uses the new backend system with createBackend, add a small module that mounts the Kubelog router. This works even if the package does not ship a native module.
// file packages/backend/src/plugins/kubelogModule.ts
import { createBackendModule, coreServices } from '@backstage/backend-plugin-api';
import { createRouter } from '@jfvilas/plugin-kubelog-backend';

export const kubelogModule = createBackendModule({
  pluginId: 'kubelog',
  moduleId: 'router',
  register(env) {
    env.registerInit({
      deps: {
        httpRouter: coreServices.httpRouter,
        logger: coreServices.logger,
        config: coreServices.rootConfig,
        discovery: coreServices.discovery,
        auth: coreServices.auth,
        permissions: coreServices.permissions,
        tokenManager: coreServices.tokenManager,
      },
      async init({ httpRouter, logger, config, discovery, auth, permissions, tokenManager }) {
        const router = await createRouter({
          logger,
          config,
          discovery,
          auth,
          permissions,
          tokenManager,
        });
        httpRouter.use('/kubelog', router);
      },
    });
  },
});
Register the module in your backend entry point.
// file packages/backend/src/index.ts
import { createBackend } from '@backstage/backend-defaults';
import { kubelogModule } from './plugins/kubelogModule';
const backend = createBackend();
backend.add(kubelogModule());
backend.start();
Wire the backend on the legacy backend
Create the plugin wiring that mounts the Kubelog router.
// file packages/backend/src/plugins/kubelog.ts
import { createRouter } from '@jfvilas/plugin-kubelog-backend';
import { Router } from 'express';
import { PluginEnvironment } from '../types';

export default async function createPlugin(env: PluginEnvironment): Promise<Router> {
  return await createRouter({
    logger: env.logger,
    config: env.config,
    discovery: env.discovery,
    auth: env.auth,
    permissions: env.permissions,
    tokenManager: env.tokenManager,
  });
}
Mount it in your backend index. The exact file can vary by app template. This shows the pattern.
// file packages/backend/src/index.ts
import kubelog from './plugins/kubelog';
// create or reuse your plugin env as in your repo
const kubelogEnv = useHotMemoize(module, () => createEnv('kubelog'));
const router = await kubelog(kubelogEnv);
apiRouter.use('/kubelog', router);
If your repo does not have useHotMemoize or createEnv helpers, follow the same pattern you use for other backend routers in your app.
Configure the backend
All configuration and permissions live in app config or environment variables. The backend controls access to clusters, namespaces, pods, and restart actions. Viewing permissions and restart permissions are separate.
Put your config in app config files. Keep secrets in env.
# file app-config.yaml
# adjust keys to match the backend plugin config
kubelog:
  # example shape, replace with the real keys from the backend plugin
  # clusters will usually match the clusters you added to the Backstage Kubernetes plugin
  clusters:
    - name: prod
      kwirthUrl: https://kwirth.your-prod.example
      readKey: ${KUBELOG_PROD_READ}
      restartKey: ${KUBELOG_PROD_RESTART}
    - name: staging
      kwirthUrl: https://kwirth.your-staging.example
      readKey: ${KUBELOG_STAGING_READ}
      restartKey: ${KUBELOG_STAGING_RESTART}
Set the environment variables in your runtime environment, or put the values in a local app config file such as app-config.local.yaml.
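As a sanity check, here is a minimal sketch of how a backend could read the example keys above with the standard Backstage Config API. The readKubelogClusters helper and its keys mirror the example shape only; the real Kubelog backend defines its own schema, so treat this as an illustration rather than the plugin's code.

// file packages/backend/src/plugins/kubelogConfig.ts
// Sketch only: reads the example kubelog.clusters shape shown above, not the plugin's real schema.
import { Config } from '@backstage/config';

export function readKubelogClusters(config: Config) {
  // Env placeholders like ${KUBELOG_PROD_READ} are resolved by Backstage before this code runs.
  const clusters = config.getOptionalConfigArray('kubelog.clusters') ?? [];
  return clusters.map(cluster => ({
    name: cluster.getString('name'),
    kwirthUrl: cluster.getString('kwirthUrl'),
    readKey: cluster.getOptionalString('readKey'),
    restartKey: cluster.getOptionalString('restartKey'),
  }));
}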
Add the Kubelog tab to your entity pages
Import the plugin components in your app package and add a tab.
// file packages/app/src/components/catalog/EntityPage.tsx
import React from 'react';
import { EntityLayout } from '@backstage/plugin-catalog';
import { EntityKubelogContent, isKubelogAvailable } from '@jfvilas/plugin-kubelog';

// add the route inside the page layout
const serviceEntityPage = (
  <EntityLayout>
    {/* other tabs */}
    <EntityLayout.Route if={isKubelogAvailable} path="/kubelog" title="Kubelog">
      <EntityKubelogContent />
    </EntityLayout.Route>
  </EntityLayout>
);

export default serviceEntityPage;
You can add the same tab to other entity page variants in your app if you have them.
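For example, if your app has the common websiteEntityPage variant (the variant name is an assumption about your repo; adjust it to the variants you actually have), the same route works there too.

// file packages/app/src/components/catalog/EntityPage.tsx
// websiteEntityPage is assumed to exist in your app; reuse the imports shown above
const websiteEntityPage = (
  <EntityLayout>
    {/* other tabs */}
    <EntityLayout.Route if={isKubelogAvailable} path="/kubelog" title="Kubelog">
      <EntityKubelogContent />
    </EntityLayout.Route>
  </EntityLayout>
);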
Tag your Backstage entities
Add the Kubernetes id annotation to entities you want to browse logs for.
# file catalog-info.yaml
metadata:
  annotations:
    backstage.io/kubernetes-id: your-entity-name
Label your Kubernetes workloads
Add the same label to your Kubernetes objects so the plugin can link them to the entity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
  labels:
    backstage.io/kubernetes-id: your-entity-name
spec:
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
        backstage.io/kubernetes-id: your-entity-name
    spec:
      containers:
        - name: your-app
          image: your-oci-image
Make sure the label is on the deployment and on the pod template.
Restart Backstage
yarn --cwd packages/backend start
yarn --cwd packages/app start
Notes
- The frontend calls the backend to resolve pods and to get scoped keys for view and restart
- The backend applies permissions, then talks to Kwirth in every configured cluster (see the sketch after these notes)
- Keep Kwirth reachable from your Backstage backend network and keep keys secure
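The sketch below shows one way to separate view from restart if your backend also uses the Backstage permission framework. This is an assumption, not part of Kubelog: the permission name kubelog.pod.restart is hypothetical, and the plugin's real access model may rely only on the scoped Kwirth keys from the config section. Check the Kubelog backend for the permissions it actually exposes.

// file packages/backend/src/plugins/permissionPolicy.ts
// Sketch only: denies a hypothetical restart permission while allowing everything else.
import { PermissionPolicy, PolicyQuery } from '@backstage/plugin-permission-node';
import { AuthorizeResult, PolicyDecision } from '@backstage/plugin-permission-common';

export class KubelogPolicy implements PermissionPolicy {
  async handle(request: PolicyQuery): Promise<PolicyDecision> {
    // 'kubelog.pod.restart' is a placeholder permission name used for illustration only
    if (request.permission.name === 'kubelog.pod.restart') {
      return { result: AuthorizeResult.DENY };
    }
    return { result: AuthorizeResult.ALLOW };
  }
}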