By: Ben Lykins, RPT Solutions Architect

Introduction

The following will walk through the steps needed to deploy NeuVector via Helm. This can be done locally or on a virtual machine. I am testing on minikube, but K3s, MicroK8s, or any other distribution will work. Since this deployment is scaled down, we will also limit replicas. This guide is intended for testing, not for any production usage. Consult the official documentation for more information: SUSE NeuVector Docs.

What is SUSE NeuVector?

SUSE NeuVector, the leader in Full Lifecycle Container Security, delivers uncompromising end-to-end security for modern container infrastructures. SUSE NeuVector offers a cloud-native Kubernetes security platform with end-to-end vulnerability management, automated CI/CD pipeline security, and complete run-time security, including the industry’s only container firewall to block zero days and other threats.

What is Multipass?

Multipass is a tool to generate cloud-style Ubuntu VMs quickly on Linux, macOS, and Windows. It gives you a simple but powerful CLI that allows you to quickly access an Ubuntu command line or create your own local mini-cloud. Developers can use Multipass to prototype cloud deployments and to create fresh, customized Linux dev environments on any machine. Mac and Windows users can use Multipass as the quickest way to get an Ubuntu command line on their system. New Ubuntu users can use it as a sandbox to try new things without affecting their host machine, and without the need to dual boot.

Prerequisites

Required: Multipass installed on your local machine. The minikube VM image used below ships with minikube and kubectl preinstalled.

Set up Virtual Machine

Since I have Multipass installed, I will launch a new VM using the existing minikube image.

Run:

multipass launch -c 8 -m 16G -n demo minikube

Once completed, you should see a Launched message.

multipass launch -c 8 -m 16G -n demo minikube                                    
Launched: demo

Running multipass list will output all the launched virtual machines.

Run:

multipass list

Example Output:

Name                    State             IPv4             Image
demo                    Running           192.168.64.20    Ubuntu 22.04 LTS
                                          172.17.0.1
                                          192.168.49.1

NeuVector Setup

To connect to the virtual machine, run multipass shell demo.

The following is an example of what you will see when you shell into the VM:

Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-92-generic aarch64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

  System information as of Thu Feb 29 09:16:40 EST 2024

  System load:                      1.5546875
  Usage of /:                       13.2% of 38.59GB
  Memory usage:                     6%
  Swap usage:                       0%
  Processes:                        199
  Users logged in:                  0
  IPv4 address for br-1746f5f95e03: 192.168.49.1
  IPv4 address for docker0:         172.17.0.1
  IPv4 address for enp0s1:          192.168.64.20
  IPv6 address for enp0s1:          fd3c:28b:5cc5:4064:5054:ff:fe87:5be

NeuVector Setup – minikube

minikube is already started on the new instance; however, I am going to bump up its CPUs and memory.

If you need to install minikube, check out the documentation.
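If it is missing on your VM, the install is a two-liner following the pattern in the official docs (note: this assumes an arm64 VM like the one here; on x86_64, substitute minikube-linux-amd64):

# Download the latest minikube binary and install it to the PATH
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm64
sudo install minikube-linux-arm64 /usr/local/bin/minikube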

First, stop minikube:

Run:

minikube stop

Example Output:

ubuntu@demo:~$ minikube stop
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
🛑  1 node stopped.

Update CPUs:

Run:

minikube config set cpus 4

Example Output:

ubuntu@demo:~$ minikube config set cpus 4
❗  These changes will take effect upon a minikube delete and then a minikube start

Update Memory:

Run:

minikube config set memory 8192

Example Output:

ubuntu@demo:~$ minikube config set memory 8192
❗  These changes will take effect upon a minikube delete and then a minikube start

Delete the existing minikube:

In order for the configuration changes to be made, minikube needs to be deleted and recreated.

Run:

minikube delete

Example Output:

ubuntu@demo:~$ minikube delete
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
🔥 Removing /home/ubuntu/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.

Run:

minikube start

Example Output:

ubuntu@demo:~$ minikube start
😄  minikube v1.32.0 on Ubuntu 22.04 (arm64)
✨  Automatically selected the docker driver. Other choices: ssh, none
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=4, Memory=8192MB) ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Once connected, minikube should be up and running. Check its status by doing the following.

Run:

minikube status

Example Output:

ubuntu@demo:~$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

If you are looking to play with minikube more, there are additional add-ons that can be installed; metrics-server and dashboard are typical. In this case, we will leave the defaults, but enabling an add-on looks like the example below.
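Each add-on is a single command to enable (optional, and not needed for the rest of this guide):

# List available add-ons, then enable the two common ones
minikube addons list
minikube addons enable metrics-server
minikube addons enable dashboard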

NeuVector Setup – kubectl

This image also comes with kubectl set up:

ubuntu@demo:~$ kubectl version
Client Version: v1.28.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3

NeuVector Setup – helm

Helm is not installed, but it can be set up quickly:

ubuntu@demo:~$ helm version
Command 'helm' not found, but can be installed with:
sudo snap install helm

To install, run:

sudo snap install helm --classic

Example Output:

ubuntu@demo:~$ sudo snap install helm --classic
Download snap "core22" (1125) from channel "stable"

Once the install is complete, you can check the version with helm version:

ubuntu@demo:~$ helm version
version.BuildInfo{Version:"v3.14.2", GitCommit:"c309b6f0ff63856811846ce18f3bdc93d2b4d54b", GitTreeState:"clean", GoVersion:"go1.21.7"}

NeuVector Setup – Helm Install

Add the Helm repo. Run:

helm repo add neuvector https://neuvector.github.io/neuvector-helm/
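After adding the repo, it is good practice to refresh your local chart index so the newest chart versions show up:

helm repo update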

For this, I’m going to use the latest version, but older versions and development versions can also be listed:

helm search repo neuvector --devel -l

When this was originally written, the latest versions as of 29 February 2024 (Leap Day!) were:

ubuntu@demo:~$ helm search repo neuvector
NAME                CHART VERSION   APP VERSION DESCRIPTION
neuvector/core      2.7.3           5.3.0       Helm chart for NeuVector's core services
neuvector/crd       2.7.3           5.3.0       Helm chart for NeuVector's CRD services
neuvector/monitor   2.7.3           5.3.0       Helm chart for NeuVector monitor services

Helm Install: 

Setting up NeuVector is simple enough that I will keep most of the default values. I am reducing the controller and scanner replicas to one each; the chart's default replica counts assume a multi-node cluster and would overwhelm minikube's single node. This is fine for local and development environments. Run the following:

helm upgrade --install neuvector neuvector/core --version 2.7.6 \
--set tag=5.3.2 \
--set controller.replicas=1 \
--set cve.scanner.replicas=1 \
--create-namespace \
--namespace neuvector

The readme for the repository provides additional configuration options:

Link: NeuVector Helm Chart
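If you prefer a values file over a string of --set flags, you can dump the chart defaults and edit them. This is a sketch assuming a file named values.yaml that carries the same replica overrides:

# Export the chart's default values, edit them, then install from the file
helm show values neuvector/core --version 2.7.6 > values.yaml
helm upgrade --install neuvector neuvector/core --version 2.7.6 \
  -f values.yaml \
  --create-namespace \
  --namespace neuvector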

When running:

ubuntu@demo:~$ helm upgrade --install neuvector neuvector/core --version 2.7.6 \
--set tag=5.3.2 \
--set controller.replicas=1 \
--set cve.scanner.replicas=1 \
--create-namespace \
--namespace neuvector
Release "neuvector" does not exist. Installing it now.
NAME: neuvector
LAST DEPLOYED: Thu Feb 29 09:34:30 2024
NAMESPACE: neuvector
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get the NeuVector URL by running these commands:
  NODE_PORT=$(kubectl get --namespace neuvector -o jsonpath="{.spec.ports[0].nodePort}" services neuvector-service-webui)
  NODE_IP=$(kubectl get nodes --namespace neuvector -o jsonpath="{.items[0].status.addresses[0].address}")
  echo https://$NODE_IP:$NODE_PORT

After running Helm to install NeuVector, it will take some time for all of the pods to come up and stabilize. Once all the pods are running and stable, we should be good to try connecting.
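If you would rather block until everything is ready instead of polling, kubectl wait can do so (a convenience, not a required step; the timeout value is arbitrary):

# Wait for every pod in the namespace to report Ready
kubectl wait --for=condition=Ready pods --all -n neuvector --timeout=300s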

Run:

kubectl get pods -n neuvector

Example Output:

NAME                                        READY   STATUS    RESTARTS      AGE
neuvector-controller-pod-554d868cbd-4sk54   1/1     Running   0             3m15s
neuvector-enforcer-pod-gqhsv                1/1     Running   2 (63s ago)   3m15s
neuvector-manager-pod-8589675984-7pl2j      1/1     Running   0             3m15s
neuvector-scanner-pod-5bb668cc99-r7vkq      1/1     Running   0             3m15s

Accessing the NeuVector User Interface

I am going to port-forward this and access it from my local browser. On the virtual machine, run the following command.

kubectl port-forward --address 0.0.0.0 --namespace neuvector service/neuvector-service-webui 8443

Example Output:

ubuntu@demo:~$ kubectl port-forward --address 0.0.0.0 --namespace neuvector service/neuvector-service-webui 8443
Forwarding from 0.0.0.0:8443 -> 8443

This listens on port 8443 on all addresses (0.0.0.0) and forwards to the neuvector-service-webui service.

Accessing Locally

In your local browser, go to https://<vm-ip-address>:8443.

Please note: the IP address I used is the virtual machine’s private IP address. This can be checked again using multipass list.

multipass list

Example Output:

Name                    State             IPv4             Image
demo                    Running           192.168.64.20    Ubuntu 22.04 LTS
                                          172.17.0.1
                                          192.168.49.1

Since this is a self-signed certificate, you can ignore the warnings and proceed.

By default, the username and password are admin:admin.

Accept the EULA and you can log in.

And voilà! Update the admin password if you plan to continue using this, and you are done.

Additional Steps – Set up a MySQL container

If you are looking to test NeuVector a bit more, we will add a MySQL service and run scans on containers and nodes from the NeuVector console.

Add the bitnami repo:

helm repo add bitnami https://charts.bitnami.com/bitnami

Install:

helm install bitnami/mysql --generate-name
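Give the chart a minute, then confirm the release and its pod are up (the release name is auto-generated, so yours will differ; the label selector assumes the Bitnami chart's standard labels):

# Check the release and the MySQL pod
helm list
kubectl get pods -l app.kubernetes.io/name=mysql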

In NeuVector Interface

Go to Assets in the navigation pane on the left and select the dropdown. From the dropdown, select Containers.

Turn on Auto Scan or perform a manual scan:

Auto Scanning:

Scans will be scheduled and results returned on completion. Depending on the amount of resources, both scanners and containers, it could take time. Since this is a new cluster, it is relatively quick.

You can filter and view the vulnerabilities which are found:

Go to the Nodes page:

You can see that the nodes are scanned for vulnerabilities as well.

Conclusion

That is about it: a quick and easy way to test out NeuVector. This really just scratches the surface of the features and solutions it offers.

Award-Winning Cloud Consulting, Training & Enablement Provider Lures Industry Leaders

Pittsburgh, PA – April 16, 2024 (Newswire) – River Point Technology (RPT), an award-winning cloud consulting, training, and enablement provider, today formally announced the addition of two industry leaders to its management team to support continued high growth. RPT has named Dane Smith, Managing Director of Global Client Engagement, and Steve Pantol, VP Service Delivery, bringing their years of experience to bolster an existing high-end team.

Welcome Dane Smith, Managing Director of Global Client Engagement

Dane Smith brings over thirty years of experience building and leading sales organizations from Sun Microsystems to VMware. He has achieved success in the startup world where he has been a founding member, board advisor, and investor and had the good fortune to be a part of multiple exits. Most recently Dane helped lead computer science and data science innovation, entrepreneurship, and startups at the University of Chicago’s Polsky Center. Dane’s comprehensive experience will bring a strong focus to RPT in growing the intellectual property portfolio and RPT’s value proposition to its global F1000 customers and partners.

Welcome Steve Pantol, VP Service Delivery

Steve Pantol joins RPT as a leader with a track record of building and scaling services organizations. Steve led the development of the Cloud Services team at a large solutions integrator and more recently supported scaling the cloud native consulting group at VMware that became VMware Tanzu Labs following VMware’s acquisition of Pivotal. The successes of executing on these high growth roles will bring critical experience to RPT to support our clients’ needs as they progress through their digital transformation journeys.

Jeff Eiben, CEO of RPT, stated, “I couldn’t be more excited to bring the level of talent that Dane and Steve possess to RPT. Their industry knowledge will bring immediate value to our clients, partners, and team. My main criteria in adding executive talent to RPT was for leaders that have had a demonstrated record of accomplishment and can hit the ground running in support of our company goals. With these additions to our high-end team, strong intellectual capital and a F1000 client base of household names, we can continue to be laser focused on successful customer outcomes.” 

River Point Technology’s award-winning team, comprised of some of the world’s best IT, cloud, and DevOps experts, delivers a comprehensive suite of consulting offerings.

Through its 5-star rated training programs on leading cloud platforms, RPT equips teams with the necessary skills to excel in the cloud. Additionally, the company’s flagship offering, the RPT Accelerator, is a subscription-based enablement program that helps enterprises achieve Day 2 success in the cloud, ensuring ongoing optimization and value realization. 

With its unparalleled expertise and dedication to customer success, RPT is poised to continue leading the way in cloud consulting and enablement. By empowering organizations to leverage the cloud effectively, RPT helps them achieve their full potential and accelerate their digital transformation journeys. 

About River Point Technology: River Point Technology (RPT) is an award-winning cloud consulting, training, and enablement provider, partnering with the Fortune 500 to accelerate their digital transformation and infrastructure automation journeys and redefine the art of the possible. Our world-class team of IT, cloud, and DevOps experts helps organizations leverage the cloud for transformative growth through prescriptive methodologies, best-in-class services, and our trademarked Value Creation Technology process. From consulting and training to comprehensive year-long RPT Accelerator programs, River Point Technology empowers enterprises to achieve Day 2 success in the cloud and maximize their technology investments.

No matter what industry you’re in, cyberattacks and data breaches are a daily threat. That’s why it’s so vital for DevOps and DevSecOps teams to protect sensitive data and secure access to business-critical resources. As far too many corporate victims have learned, traditional security methods often aren’t enough. They leave organizations vulnerable to breaches, hinder agility, and place unnecessary burdens on IT teams.

This is where the power of using HashiCorp’s Vault and Boundary in concert with each other emerges, providing a sophisticated security and access management solution that goes beyond cost savings. Together, they enable organizations to greatly improve security and efficiency and enhance the user experience. However, seamlessly integrating these two powerful tools requires careful planning and consideration. That’s why we’ve pulled together some expert tips to assist with the journey.

  1. Understanding the Individual Roles: Understanding the distinct functions of Vault and Boundary is fundamental for effective integration.
  2. Planning and Design: Defining clear goals, user roles, and access controls is vital for a secure and efficient configuration.
  3. Authentication and Authorization: Determining the appropriate methods for user authentication and authorization ensures secure access to resources.
  4. Secrets Management: Establishing robust secret lifecycle management practices is crucial for protecting sensitive information.
  5. Session Management: Configuring secure session management settings is essential for controlling access duration and privileges.
  6. Monitoring and Auditing: Implementing comprehensive monitoring and auditing capabilities aids in maintaining visibility and responding to potential threats.
  7. Best Practices and Tips: Exploring additional recommendations for optimizing the integration and ensuring long-term success.

Vault: Provides organizations with identity-based security to automatically authenticate and authorize access to secrets and other sensitive data. It offers a centralized platform for storing, managing, and accessing secrets like passwords, API keys, and certificates. Vault enforces access control through granular policies, ensuring users only have the specific permissions they need for their tasks.
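As a minimal sketch of that model with the Vault CLI (the secret/myapp/db path and values are hypothetical, and assume the KV secrets engine is mounted at secret/):

# Write a database credential to the KV secrets engine
vault kv put secret/myapp/db username=app password=example-password

# Read it back; policy determines who may do this
vault kv get secret/myapp/db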

Boundary: Think of Boundary as a vigilant gatekeeper, meticulously controlling access to resources based on pre-defined policies and user identities. It acts as a session management layer, facilitating secure connections between users and target resources like databases, applications, and servers. Boundary leverages Vault for dynamic credential generation and access control enforcement, ensuring users only possess the necessary credentials for the duration of their session. Built for cloud-native environments, modern privileged access management from HashiCorp Boundary uses identity-driven controls to secure user access across dynamic environments.
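From the client’s perspective, that flow looks roughly like this with the Boundary CLI (the auth method and target IDs are placeholders; when a Vault credential library is attached to the target, the session credentials are brokered from Vault):

# Authenticate to the Boundary controller
boundary authenticate password -auth-method-id ampw_1234567890 -login-name admin

# Open a session to an SSH target
boundary connect ssh -target-id ttcp_1234567890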

By understanding these distinct roles, we can begin to visualize how these two tools can work together to create a secure and efficient access management solution.

Before starting the integration journey, planning and design are key: define clear goals, user roles, and access controls up front.

With a clear understanding of user roles and access needs, you can configure secure authentication and authorization mechanisms, determining the appropriate methods for each.

Securely managing secrets is critical for protecting sensitive information and preventing unauthorized access; robust secret lifecycle management practices are key when integrating Vault and Boundary.

Controlling user sessions is crucial for maintaining a secure environment. When integrating Vault and Boundary, configure session management settings that control access duration and privileges.

Implementing comprehensive monitoring and auditing capabilities is essential for maintaining visibility into user activity and identifying potential threats, and that visibility should span both Boundary and Vault when they work together.

When integrating Vault and Boundary, there are many elements that need to be considered and prioritized for the journey. HashiCorp Terraform can be used for efficient configuration management and infrastructure deployment. Consider configuring Vault and Boundary for high availability to ensure resilience and minimize downtime.

To get the most out of your investment with custom tips and strategy, reach out to RPT. We provide a tailored 360-degree approach that addresses your specific environment and requirements. Once our team of experts meticulously analyzes all pertinent information and carefully considers every relevant aspect, we are ready to craft exciting and innovative solutions tailored to your unique needs and circumstances.

By following these essential considerations and best practices you can successfully integrate HashiCorp Vault and Boundary to better protect your organization against external and internal threats. The result can be a more secure and efficient access management ecosystem that empowers your organization to thrive in the ever-changing digital landscape. Remember, security is a continuous journey, not a destination. As technology evolves, so do the techniques used by hackers and cybercriminals. That’s why it’s imperative for DevOps and DevSecOps teams to regularly review and update their organization’s security practices to stay ahead of the threat landscape. 

For more tips on how to maximize your investment in Vault and Boundary, read this.

Need help maximizing the benefits of using Vault & Boundary? Contact the experts at RPT. As HashiCorp’s 2023 Global Competency of the Year and the only HashiCorp partner with all 3 certifications (Security, Infrastructure, & Networking), you know you’re working with the leading HashiCorp services partner. Contact [email protected] today.