Stress is a pervasive issue that affects people from all walks of life, including those in the business world. Unfortunately, the effects of stress on businesses in America can be detrimental, impacting productivity, morale, and even the bottom line.
The American Institute of Stress estimates that stress costs businesses in the United States approximately $300 billion per year in absenteeism, turnover, decreased productivity, and healthcare costs.
Stress can drive up absenteeism and turnover rates in businesses. Employees who are stressed may take more sick days, be less productive while at work, and ultimately leave the company if their stress becomes overwhelming. The result is lower productivity, lower morale, and higher costs for recruiting and training replacement employees.
Employees’ health can also be affected by stress, leading to increased healthcare costs for the company. Those who are stressed may be more likely to experience physical or mental health issues, such as anxiety, depression, and cardiovascular disease. This can lead to increased absenteeism and disability claims, which can increase the overall cost of healthcare for the company.
According to the National Institute for Occupational Safety and Health, 40% of workers reported their job was very or extremely stressful.
Stress can have a significant impact on businesses in America, leading to decreased productivity, increased absenteeism and turnover, lower morale, and higher healthcare costs. That is why RPT takes steps to manage stress in the workplace, helping its employees maintain their health and well-being while also improving productivity and overall company success.
This includes providing resources for stress management, promoting work-life balance, creating a positive work environment, and encouraging open communication between employees and management. Learn more about joining the RPT team here!
By Bryan Krausen: Author, Instructor, and VP, Consulting Services at RPT
So why is secrets management so important? Regardless of the type of environment you work in, there will be privileged credentials needed by applications, users, or other software platforms to manage your environment. Secrets can be anything your organization deems confidential and that could cause harm to the company if shared or exposed. Examples include database credentials used to read customer data, a private key used to decrypt communications to your app server, or domain admin credentials used by your vulnerability scanner during nightly runs. Managing these privileged credentials is critical to an organization’s security posture.
Secrets are used EVERYWHERE in organizations. Think about the credentials that were required for the last application or deployment you participated in, regardless of how basic or complex it was. As a human user, you likely needed privileged credentials to provision resources in your production environment, like gaining access to VMware vCenter to deploy virtual machines, requesting a TLS certificate for your application, or logging into Terraform Cloud to provision Amazon EC2 instances. Applications, in turn, need access to additional services within your organization, like an internal API, a file share, or the ability to read/write to a database server to store data. An application might also need to register itself within your service catalog (service mesh) or execute a script that traverses a proxy and pulls down packages from Artifactory. These actions all require some privileged credential or secret that needs to be managed appropriately.
So where should all these secrets live? Most organizations understand these secrets should be managed in some secrets management solution. However, that doesn’t always reflect what is actually in practice. I’ve worked with countless organizations that keep credentials in an Excel sheet, a OneNote document, or even a text file on their desktop. That strategy provides absolutely no security and exposes these companies to breaches. Other organizations have taken it a step further and used a consumer-based solution, like 1Password or LastPass, to store these long-lived credentials. That’s better than nothing, but it doesn’t give the organization complete visibility and management of credentials. Plus, we’re talking about the practice of DevOps here, so it doesn’t offer much in terms of automated retrieval or rotation either.
Ideally, organizations should adopt a proper secrets management tool that consolidates secrets and provides features such as role-based access control, rotation and revocation, expiration, and auditing capabilities.
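To make that concrete, here’s a minimal sketch of what programmatic retrieval looks like against HashiCorp Vault using its Python client, hvac. The Vault address, token, and secret path are placeholders for illustration, not a prescription:

```python
# A minimal sketch of reading a secret from HashiCorp Vault with the
# hvac client library. The address, token, and path are placeholders.
import hvac

# In production you'd authenticate via AppRole, Kubernetes, or a cloud
# auth method rather than pasting a raw token.
client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

# Read from the KV v2 engine mounted at "secret/"; the path is hypothetical.
response = client.secrets.kv.v2.read_secret_version(
    path="apps/billing/db",
    mount_point="secret",
)

creds = response["data"]["data"]
print(creds["username"])  # never log the password itself
```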
Let’s talk about the difference between long-lived secrets and dynamic secrets.
Not all secrets are created equal. Most organizations default to creating long-lived, static credentials that are often shared among teams and applications. Creating these credentials usually requires a long process: ticket creation, security approval, management approval, and so on. Because obtaining credentials is often tedious, engineers and administrators will reuse or share these credentials among different applications rather than repeat this process. Be honest, how many times have you taken that shortcut in Active Directory? I know I have done it hundreds of times in the past.
These reused and often shared credentials are hard to audit, can be impossible to rotate, and provide very little accountability. Additionally, these static credentials offer 24/7 access to the target system, even though access might only be needed for minutes per day.
In contrast with static credentials, many organizations are realizing the benefits of migrating to dynamically generated secrets. Rather than creating the credentials beforehand, applications request credentials on demand when needed. The application uses the dynamic credentials to access a system or platform and perform its work, and the credentials are then revoked/deleted afterward. If these dynamic credentials are accidentally written to a log file or committed to a code repository, they are no longer a security threat because they have already been invalidated. And because dynamic credentials are generated per request (with proper authentication, of course), each instance of an application can obtain its own credential to access the backend system.
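Here’s a hedged sketch of that request-and-revoke lifecycle using Vault’s database secrets engine through hvac. It assumes an operator has already mounted the engine at "database" and configured a role named "readonly"; both names are hypothetical:

```python
# A sketch of the dynamic-secret lifecycle: request short-lived database
# credentials from Vault, use them, then revoke them early.
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

# Vault creates a brand-new database user on demand for this request.
resp = client.secrets.database.generate_credentials(
    name="readonly",        # hypothetical role configured by an operator
    mount_point="database",
)

username = resp["data"]["username"]
password = resp["data"]["password"]
lease_id = resp["lease_id"]      # handle used to renew or revoke the lease
ttl = resp["lease_duration"]     # seconds until Vault deletes the user

# ... connect to the database and do the work here ...

# Revoke as soon as the work is done instead of waiting out the TTL.
client.sys.revoke_lease(lease_id)
```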
For example, let’s assume we’re using Terraform to deploy our infrastructure to our favorite public cloud platform. If you were using static credentials, you would log into the cloud platform, create static credentials (probably highly privileged ones), and provide those credentials for Terraform to provision and manage your infrastructure. Those highly privileged credentials are valid 24/7, even though you only run Terraform a few times a day. On the other hand, if you were using a dynamic credential, Terraform would first obtain a credential, then provision or manage the infrastructure, and the credential would be invalidated afterward. When Terraform isn’t running, there is no credential that can be exposed or misused. Even if the dynamic credential were written to logs or accidentally committed to a public GitHub repo, it wouldn’t matter, since it was revoked when the job completed or after a minimal TTL.
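As a rough sketch of that flow (with an assumed Vault AWS secrets engine and a hypothetical "deployer" role), the snippet below pulls short-lived AWS credentials and hands them to a Terraform run through environment variables:

```python
# Pull short-lived AWS credentials from Vault's AWS secrets engine and
# hand them to Terraform. Role name and mount point are assumptions.
import os
import subprocess

import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

# Vault calls AWS (IAM or STS) behind the scenes and returns a fresh keypair.
aws = client.secrets.aws.generate_credentials(name="deployer", mount_point="aws")

env = os.environ.copy()
env["AWS_ACCESS_KEY_ID"] = aws["data"]["access_key"]
env["AWS_SECRET_ACCESS_KEY"] = aws["data"]["secret_key"]
if aws["data"].get("security_token"):
    env["AWS_SESSION_TOKEN"] = aws["data"]["security_token"]

# Terraform only ever sees credentials that expire after the lease TTL.
subprocess.run(["terraform", "apply", "-auto-approve"], env=env, check=True)
```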
Access to secrets should be tightly controlled, and only authorized personnel should be able to access them. Ideally, two-factor authentication or a multi-step approval process should be in place for highly privileged credentials, such as domain access, root credentials, or secrets used to obtain confidential data. Access to secrets should be limited based on an employee’s role within the organization or an application’s requirements to fulfill its duties.
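In a tool like Vault, that role-based model translates into policies attached to tokens or roles. A small illustrative sketch, with a hypothetical path and policy name, and an admin-level token assumed:

```python
# Least privilege in Vault: a policy that grants read-only access to one
# team's secrets and nothing else. Path and policy name are hypothetical.
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

# Vault policies are written in HCL; this one allows reads on a single path.
billing_policy = """
path "secret/data/apps/billing/*" {
  capabilities = ["read"]
}
"""

client.sys.create_or_update_policy(name="billing-read", policy=billing_policy)

# A token carrying only this policy cannot touch any other team's secrets.
token = client.auth.token.create(policies=["billing-read"], ttl="1h")
```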
Access to secrets should also be closely monitored, and a log should be maintained of every action taken on them. Logs should be ingested into a SIEM or log correlation system, like Splunk, Sumo Logic, or Datadog, to create dashboards and alert on specific actions. This can help quickly detect and respond to potential security threats within the organization.
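As one example of what this looks like in practice, the sketch below enables Vault’s file audit device and shows the sort of errored-request event an alert might key on. The log path is an assumption, and a real deployment would have a forwarder ship the file to the SIEM rather than a script reading it in place:

```python
# Enable Vault's file audit device so every request is written out as a
# JSON line; the log file path is an assumption for illustration.
import json

import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="s.xxxxxxxx")

client.sys.enable_audit_device(
    device_type="file",
    options={"file_path": "/var/log/vault_audit.log"},
)

# The kind of event a downstream alert might key on: errored requests.
# (Assumes this runs on the host where the audit log is visible.)
with open("/var/log/vault_audit.log") as log:
    for line in log:
        event = json.loads(line)
        if event.get("error"):
            print("audit alert:", event.get("request", {}).get("path"), event["error"])
```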
In a DevOps and automated world, secrets management solutions must be centered around a fully featured REST API. With one, access to the platform can be automated entirely by whatever orchestrator or pipeline tool the organization uses, simplifying company-wide adoption. Secrets management tools such as HashiCorp Vault, AWS Secrets Manager, and Azure Key Vault can provide organizations with features such as encryption at rest, role-based access control, and auditing capabilities to help protect secrets. From my experience, these are the most popular tools used by organizations.
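To illustrate the API-first point, here’s the earlier KV read performed with nothing but raw HTTP, which is exactly what a pipeline tool would do under the hood. Address, token, and path remain placeholders:

```python
# The same KV v2 read as the earlier hvac example, done over Vault's
# raw REST API with the requests library.
import requests

VAULT_ADDR = "https://vault.example.com:8200"
headers = {"X-Vault-Token": "s.xxxxxxxx"}

# Equivalent to: vault kv get secret/apps/billing/db
resp = requests.get(
    f"{VAULT_ADDR}/v1/secret/data/apps/billing/db",
    headers=headers,
    timeout=10,
)
resp.raise_for_status()

secret = resp.json()["data"]["data"]
print(sorted(secret))  # inspect the keys without printing secret values
```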
How safe is your cloud infrastructure? The team at River Point Technology consists of top experts who are the IT industry’s best at securing, storing, and controlling secrets in the cloud. Our approach is centered around helping our clients achieve maximum value out of their technology investments. Contact us today for a security assessment.
Basketball, from March Madness to the NBA, has always been at the forefront of innovation and technology. It has become a high-tech industry, and the use of automation and cloud computing has become an essential part of the game.
The National Basketball Association (NBA) and its teams are always exploring new ways to gain a competitive edge. One of the most significant technological advancements in recent years has been the integration of automation and cloud computing into the sport.
Cloud computing has become a crucial part of the league’s infrastructure, allowing teams to store and analyze vast amounts of data, including player statistics and game footage. The best part is that they can access this data from anywhere and collaborate with other teams in real time.
Automation has been a game-changer.
The league uses automated software to handle many operations, such as scheduling games, managing ticket sales, and running advertising campaigns. This automation saves time and money, making operations more efficient and accurate.
One of the most exciting things about automation and cloud computing is how they’re used in AI (artificial intelligence) and ML (machine learning). These technologies analyze player data and game footage to identify patterns and make predictions about player performance and game outcomes. Coaches and managers use this information to develop training programs and game strategies that give their teams a competitive edge.
Fans also benefit from automation and cloud computing, as the league uses chatbots to answer their questions and provide them with the information they need. Cloud-based systems deliver streaming video and other content to fans, making it easier for them to stay connected with their favorite teams from anywhere in the world.
These technologies have enabled coaches and analysts to gain new insights into player performance and game strategy, and have led to the development of new training tools. As the use of automation and cloud computing continues to grow in basketball, we can expect to see even more innovation and advancement in the sport.
When the NBA took on new technology to enhance the experience for fans, players, and teams, it needed a cloud-first approach, plus a cloud partner that allowed it to scale up dramatically when needed. “Being able to spin up more compute when we need it during games is crucial,” Sarachek says in an article written about the experience.
Using Cloud Data to Deliver Personalized Data On Demand
In 2020, NBA CourtOptix was launched with a primary focus on enhancing the fan experience. The platform delivers post-game analysis that combines video with previously challenging-to-track statistics, such as identifying players who get double-teamed more frequently. Now, with the help of Microsoft Azure, the NBA can share advanced stats that enrich journalists’, teams’, and employees’ understanding of the game, which can transform how the game is experienced.
Every game night, NBA teams receive a cache of data on each game, a detailed breakdown that is changing team strategies. Teams that have signed up to receive data get it after each contest, helping them make adjustments on the fly—all thanks to a seamless backend data flow created by Microsoft Azure developers. As soon as a game ends, Azure Cosmos DB is used to check metadata to ensure the system should process the matchup. Then, Azure Kubernetes Service kicks off various pipelines running on Azure Databricks, which leverages ML and AI to process information (like the aforementioned defensive metrics). After being stored on Azure Data Lake Storage, the data is automatically synced to teams’ Azure Storage Containers using Azure Data Share. This cloud-first approach helps the NBA save money by being able to scale resources up and down as needed while ensuring data is seamlessly processed and shared with teams.
“With Azure Data Share, we can go into the Azure console and invite a new team or partner to receive the data,” Sarachek says. “Once they accept the invitation, they receive updated data in their Azure environments without having to build workflows or processes to pull it in themselves.” (Read the full article here.)
Automation and cloud computing have become essential tools for the NBA and its teams. They help improve player performance, streamline their operations, enhance the fan experience, and explore new innovations. It’s exciting to think about what’s next for the NBA as they continue to embrace these technologies and take the game to the next level!