By: Ben Lykins, RPT Solutions Architect
The following walks through the steps needed to deploy NeuVector via Helm. This can be done locally or on a virtual machine. I am using minikube for testing, but K3s, MicroK8s, or any other distribution will work. Since this is going to be scaled down, we will also limit replicas. This guide is intended for testing and not for any production usage. Consult the official documentation for more information: SUSE NeuVector Docs.
What is SUSE NeuVector?
SUSE NeuVector, the leader in Full Lifecycle Container Security, delivers uncompromising end-to-end security for modern container infrastructures. SUSE NeuVector offers a cloud-native Kubernetes security platform with end-to-end vulnerability management, automated CI/CD pipeline security, and complete run-time security, including the industry’s only container firewall to block zero days and other threats.
What is Multipass?
Multipass is a tool to generate cloud-style Ubuntu VMs quickly on Linux, macOS, and Windows. It gives you a simple but powerful CLI that allows you to quickly access an Ubuntu command line or create your own local mini-cloud. Developers can use Multipass to prototype cloud deployments and to create fresh, customized Linux dev environments on any machine. Mac and Windows users can use Multipass as the quickest way to get an Ubuntu command line on their system. New Ubuntu users can use it as a sandbox to try new things without affecting their host machine, and without the need to dual boot.
Required: multipass, which can launch an instance with minikube already installed. Since I have Multipass installed, I will launch a new VM using the existing minikube image.
Run:
multipass launch -c 8 -m 16G -n demo minikube
Once completed, you should see that the VM has launched:
multipass launch -c 8 -m 16G -n demo minikube
Launched: demo
Running multipass list will output all the launched virtual machines:
Name State IPv4 Image
demo Running 192.168.64.20 Ubuntu 22.04 LTS
192.168.49.1
172.17.0.1
To connect to the virtual machine, run multipass shell demo.
The following is an example of shelling into the VM:
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-92-generic aarch64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
System information as of Thu Feb 29 09:16:40 EST 2024
System load: 1.5546875
Usage of /: 13.2% of 38.59GB
Memory usage: 6%
Swap usage: 0%
Processes: 199
Users logged in: 0
IPv4 address for br-1746f5f95e03: 192.168.49.1
IPv4 address for docker0: 172.17.0.1
IPv4 address for enp0s1: 192.168.64.20
IPv6 address for enp0s1: fd3c:28b:5cc5:4064:5054:ff:fe87:5be
minikube is already started on the new instance; however, I am going to bump up CPUs and Memory for it.
If you need to install minikube, check out the documentation.
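For reference, installing it on a Linux VM like this one typically looks something like the following (pick the release binary that matches your architecture; this VM is arm64):
# Download the latest minikube binary and install it to /usr/local/bin
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm64
sudo install minikube-linux-arm64 /usr/local/bin/minikube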
Run:
minikube stop
Example Output:
ubuntu@demo:~$ minikube stop
✋ Stopping node "minikube" ...
🛑 Powering off "minikube" via SSH ...
🛑 1 node stopped.
Run:
minikube config set cpus 4
Example Output:
ubuntu@demo:~$ minikube config set cpus 4
❗ These changes will take effect upon a minikube delete and then a minikube start
Run:
minikube config set memory 8192
Example Output:
ubuntu@demo:~$ minikube config set memory 8192
❗ These changes will take effect upon a minikube delete and then a minikube start
In order for the configuration changes to be made, minikube needs to be deleted and recreated.
Run:
minikube delete
Example Output:
ubuntu@demo:~$ minikube delete
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /home/ubuntu/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
Run:
minikube start
Example Output:
ubuntu@demo:~$ minikube start
😄 minikube v1.32.0 on Ubuntu 22.04 (arm64)
✨ Automatically selected the docker driver. Other choices: ssh, none
📌 Using Docker driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=4, Memory=8192MB) ...
🐳 Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring bridge CNI (Container Networking Interface) ...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Once connected, minikube should be up and running. Check its status by doing the following.
Run:
minikube status
Example Output:
ubuntu@demo:~$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
If you are looking to play with minikube more, there are additional add-ons that can be installed. In this case we will leave the defaults, but metrics-server and dashboard are typical choices (see the example below).
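For example, enabling those two is a one-liner each:
minikube addons enable metrics-server
minikube addons enable dashboard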
This image also comes with kubectl set up:
ubuntu@demo:~$ kubectl version
Client Version: v1.28.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3
Helm is not installed, but can be quickly set up:
ubuntu@demo:~$ helm version
Command 'helm' not found, but can be installed with:
sudo snap install helm
To install, run:
sudo snap install helm --classic
Example Output:
ubuntu@demo:~$ sudo snap install helm --classic
Download snap "core22" (1125) from channel "stable"
Once the install is complete, you can check the version with helm version:
ubuntu@demo:~$ helm version
version.BuildInfo{Version:"v3.14.2", GitCommit:"c309b6f0ff63856811846ce18f3bdc93d2b4d54b", GitTreeState:"clean", GoVersion:"go1.21.7"}
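As an aside, if snap is not available on your distro, Helm's upstream install script should work as well (check the Helm docs for the current version of these steps):
# Download and run Helm's official install script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh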
Add the Helm repo by running:
helm repo add neuvector https://neuvector.github.io/neuvector-helm/
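It does not hurt to refresh the local chart cache afterwards:
helm repo update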
For this, I am going to use the latest version, but older versions and development versions can be listed with:
helm search repo neuvector --devel -l
When this was originally written (29 February 2024, Leap Day!), the latest was:
ubuntu@demo:~$ helm search repo neuvector
NAME CHART VERSION APP VERSION DESCRIPTION
neuvector/core 2.7.3 5.3.0 Helm chart for NeuVector's core services
neuvector/crd 2.7.3 5.3.0 Helm chart for NeuVector's CRD services
neuvector/monitor 2.7.3 5.3.0 Helm chart for NeuVector monitor services
Helm Install:
Setting up NeuVector is simple enough that I will keep most of the default values. I am reducing the controller and scanner replicas; leaving the defaults would overwhelm the cluster, since minikube is running a single node. This is fine for local and development environments. Run the following:
helm upgrade --install neuvector neuvector/core --version 2.7.6 \
--set tag=5.3.2 \
--set controller.replicas=1 \
--set cve.scanner.replicas=1 \
--create-namespace \
--namespace neuvector
The README for the chart repository provides additional configuration options.
When running:
ubuntu@demo:~$ helm upgrade --install neuvector neuvector/core --version 2.7.6 \
--set tag=5.3.2 \
--set controller.replicas=1 \
--set cve.scanner.replicas=1 \
--create-namespace \
--namespace neuvector
Release "neuvector" does not exist. Installing it now.
NAME: neuvector
LAST DEPLOYED: Thu Feb 29 09:34:30 2024
NAMESPACE: neuvector
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get the NeuVector URL by running these commands:
NODE_PORT=$(kubectl get --namespace neuvector -o jsonpath="{.spec.ports[0].nodePort}" services neuvector-service-webui)
NODE_IP=$(kubectl get nodes --namespace neuvector -o jsonpath="{.items[0].status.addresses[0].address}")
echo https://$NODE_IP:$NODE_PORT
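As a side note, the same overrides can also be kept in a values file and passed with -f instead of repeating --set flags; something like the following (the file name is arbitrary):
# Write the overrides used above into a values file
cat > neuvector-values.yaml <<'EOF'
tag: 5.3.2
controller:
  replicas: 1
cve:
  scanner:
    replicas: 1
EOF
# Install (or upgrade) using the values file
helm upgrade --install neuvector neuvector/core --version 2.7.6 \
  -f neuvector-values.yaml --create-namespace --namespace neuvector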
After running Helm to install NeuVector, it will take some time for all of the pods to come up and stabilize. Once all the pods are running and stable, we should be good to try connecting.
Run:
kubectl get pods -n neuvector
Example Output:
NAME READY STATUS RESTARTS AGE
neuvector-controller-pod-554d868cbd-4sk54 1/1 Running 0 3m15s
neuvector-enforcer-pod-gqhsv 1/1 Running 2 (63s ago) 3m15s
neuvector-manager-pod-8589675984-7pl2j 1/1 Running 0 3m15s
neuvector-scanner-pod-5bb668cc99-r7vkq 1/1 Running 0 3m15s
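If you would rather block until everything is ready instead of re-running kubectl get pods, something like kubectl wait should do the trick (adjust the timeout to taste):
kubectl -n neuvector wait --for=condition=Ready pods --all --timeout=300s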
I am going to port-forward this and access it from my local browser. On the virtual machine, run the following command.
kubectl port-forward --address 0.0.0.0 --namespace neuvector service/neuvector-service-webui 8443
Example Output:
ubuntu@demo:~$ kubectl port-forward --address 0.0.0.0 --namespace neuvector service/neuvector-service-webui 8443
Forwarding from 0.0.0.0:8443 -> 8443
This will listen on port 8443 on all addresses (0.0.0.0) and forward to the neuvector-service-webui service.
In your local browser, go to https://ipaddress:8443.
Please note: the IP address I pulled is the virtual machine's private IP address. This can be checked again using multipass list.
multipass list
Example Output:
Name State IPv4 Image
demo Running 192.168.64.20 Ubuntu 24.04 LTS
172.17.0.1
192.168.49.1
Since this is a self-signed certificate, you can ignore the warnings and proceed.
By default, username and password are admin:admin.
Check off the EULA and you can log in.
And voilà! Update the admin password if you plan to continue using this, and you are done.
To test NeuVector a bit more, we will add a MySQL service and run scans on containers and nodes from the NeuVector console.
Add the bitnami repo:
helm repo add bitnami https://charts.bitnami.com/bitnami
Install:
helm install bitnami/mysql --generate-name
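Give the MySQL pod a minute or two to pull its image and start; you can watch it come up with:
kubectl get pods -w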
Go to Assets in the navigation pane on the left and open the dropdown. From the dropdown, select Containers.
Scans will be scheduled and return results once completed. Depending on the number of resources (both scanners and containers), this could take time; since this is a new cluster, it is relatively quick.
You can filter and view the vulnerabilities which are found:
You can see that the nodes are scanned for vulnerabilities as well.
That is about it: a quick and easy way to test out NeuVector. This is really just scratching the surface of the features and solutions it offers.
No matter what industry you’re in, cyberattacks and data breaches are a daily threat. That’s why it’s so vital for DevOps and DevSecOps teams to protect sensitive data and secure access to business-critical resources. As far too many a corporate victim has learned, traditional security methods often aren’t enough. They leave organizations vulnerable to breaches, hinder agility, and place unnecessary burdens on IT teams.
This is where the power of using HashiCorp’s Vault and Boundary in concert with each other emerges, providing a sophisticated security and access management solution that goes beyond cost savings. Together, they enable organizations to greatly improve security, efficiency, and enhance the user experience. However, seamlessly integrating these two powerful tools requires careful planning and consideration. That’s why we’ve pulled together some expert tips to assist with the journey.
When integrating HashiCorp Vault and Boundary, we suggest you keep in mind these crucial aspects to ensure a smooth and successful implementation.
Vault: Provides organizations with identity-based security to automatically authenticate and authorize access to secrets and other sensitive data. It offers a centralized platform for storing, managing, and accessing secrets like passwords, API keys, and certificates. Vault enforces access control through granular policies, ensuring users only have the specific permissions they need for their tasks.
Boundary: Think of Boundary as a vigilant gatekeeper, meticulously controlling access to resources based on pre-defined policies and user identities. It acts as a session management layer, facilitating secure connections between users and target resources like databases, applications, and servers. Boundary leverages Vault for dynamic credential generation and access control enforcement, ensuring users only possess the necessary credentials for the duration of their session. Built for cloud-native environments, modern privileged access management from HashiCorp Boundary uses identity-driven controls to secure user access across dynamic environments.
By understanding these distinct roles, we can begin to visualize how these two tools can work together to create a secure and efficient access management solution.
Before starting the integration journey, planning and design are key. Here are some aspects to consider:
With a clear understanding of user roles and access needs, you can configure secure authentication and authorization mechanisms. Key elements to consider:
Securely managing secrets is critical for protecting sensitive information and preventing unauthorized access. Key considerations for integrating Vault and Boundary:
Controlling user sessions is crucial for maintaining a secure environment. When integrating Vault and Boundary, keep these things in mind:
Implementing comprehensive monitoring and auditing capabilities is essential for maintaining visibility into user activity and identifying potential threats. When Boundary and Vault work together, these things are to be considered:
When integrating Vault and Boundary there are many elements that need to be considered and prioritized for the journey. HashiCorp Terraform can be used for efficient configuration management and infrastructure deployment. Consider configuring Vault and Boundary for high availability to ensure resilience and minimize downtime.
To get the most out of your investment with custom tips and strategy, reach out to RPT. We provide a tailored 360-degree approach that addresses your specific environment and requirements. Once our team of experts meticulously analyzes all pertinent information and carefully considers every relevant aspect, we are ready to craft exciting and innovative solutions tailored to your unique needs and circumstances.
Summary
By following these essential considerations and best practices you can successfully integrate HashiCorp Vault and Boundary to better protect your organization against external and internal threats. The result can be a more secure and efficient access management ecosystem that empowers your organization to thrive in the ever-changing digital landscape. Remember, security is a continuous journey, not a destination. As technology evolves, so do the techniques used by hackers and cybercriminals. That’s why it’s imperative for DevOps and DevSecOps teams to regularly review and update their organization’s security practices to stay ahead of the threat landscape.
For more tips on how to maximize your investment in Vault and Boundary, read this.
Need help maximizing the benefits of using Vault & Boundary? Contact the experts at RPT. As HashiCorp’s 2023 Global Competency of the Year and the only HashiCorp partner with all 3 certifications (Security, Infrastructure, & Networking), you know you’re working with the leading HashiCorp services partner. Contact [email protected] today.
About River Point Technology
River Point Technology (RPT) is an award-winning cloud and DevOps service provider that helps Fortune 500 companies accelerate digital transformation and redefine what is possible. Our passionate team of engineers and architects simplify the deployment, integration, and management of emerging technology by delivering state-of-the-art custom solutions. We further position organizations to experience Day 2 success at scale and realize the value of their technology investments by offering best-in-class enablement opportunities. These include the subscription-based RPT Resident Accelerator program that’s designed to help enterprises manage the day-to-day operations of an advanced tech stack, the just-launched RPT Connect App, and our expert-led training classes. Founded in 2011, our unique approach to evaluating and adopting emerging technology is based on our proprietary and proven Value Creation Technology process that empowers IT teams to boldly take strategic risks that result in measurable business impact. What’s your vision? Contact River Point Technology today and see what’s possible.
As organizations manage their digital transformation initiatives in today’s business world, their technology investments are often viewed under a microscope. Do they align with strategic objectives? Do they support the company’s innovation goals? Will they pose a business risk? And of course, how cost effective is the investment? Is it a financially responsible choice and what’s the ROI? When it comes to costs associated with protecting an organization’s vital resources, infrastructure, and data, many CTOs, CISOs, and CIOs have to weigh the cost of investment in new technologies vs relying on legacy systems that actually increase their exposure to nefarious actors.
The simple truth is that the typical security methods employed by many enterprises today often come with hidden costs: potential for costly breaches, increased vulnerability, wasted man-hours, and operational inefficiencies. This is where HashiCorp Vault and Boundary come in. When leveraged together they deliver a host of benefits, including cost optimization, streamlined workflows, simplified compliance, and of course they reduce the risk and minimize the financial impact of security incidents.
Before exploring how Vault and Boundary save money, let’s examine the cost burdens associated with traditional security approaches:
By integrating Vault and Boundary, organizations can unlock cost-saving benefits without compromising their organization’s security posture:
Long-Term Value Proposition:
While the upfront costs of acquiring and implementing Vault and Boundary should be considered, the opportunity for cost savings over the long term through increased efficiency, reduced downtime, improved security, and simplified compliance makes the investment financially sound.
By implementing HashiCorp Vault and Boundary together, organizations can optimize their financial investment in both platforms. Through automation, centralization, and streamlined workflows, these powerful tools empower organizations to achieve a balance between robust security and financial sustainability, paving the way for an organization to achieve long-term success in the ever-evolving digital world.