
Daily Tech Digest - December 17, 2024

Together For Good: How Humans And AI Can Close The Health Gap

While the potential is immense, AI’s effectiveness in closing the health gap hinges on more than just technological advancement. AI must be deliberately tailored, trained, tested, and targeted to bring out the best in and for people and the planet. This means anchoring AI development and deployment in a holistic understanding of humans and the environment they evolve in. It also entails the design of ethical frameworks, transdisciplinary collaboration, and 360-degree strategies that systematically bring out the complementarity of AI and natural intelligence (NI), including the knowledge, experience, and intuition of humans. ... Closing the gap of preventable health inequalities cannot be achieved by advanced algorithms alone. It requires us to integrate the strengths of artificial intelligence with natural intelligence — the knowledge, ethical judgment, empathy, and cultural understanding of human beings — to ensure that solutions are both effective and just. By anchoring AI in localized insight and human expertise, we can align personal health improvements (micro) with community-led action (meso), informed national policies (macro), and globally coordinated strategies (meta), delivering equitable outcomes in every arena of the organically evolving kaleidoscope that we are part of.


How to Take a Security-First Approach to AI Implementation

Whether it's a third-party tool or an in-house project, thorough research and a clear plan will go a long way toward reducing risks. When developing guidelines for AI implementation, the first step is to match the business case with available tools, remembering that some models are better suited to specific tasks than others. Practicing a Secure by Design strategy from the ground up can future-proof AI implementation. These principles ensure that security is prioritized throughout the entire lifecycle of an AI product. A Secure by Design methodology implements multiple layers of defense against cyberthreats. During the planning stage, the security team's input is critical for a Secure by Design approach. Vendor trust is also vital: evaluating vendors for trustworthiness and auditing contracts thoroughly, including regularly monitoring updates to vendor terms and conditions, is imperative. Data quality should also be assessed against metrics like accuracy, relevance, and completeness. ... Keeping security at the forefront from the get-go confers advantages, especially as tools and risks evolve. Safer AI is on the horizon as more users adhere to best practices through regulatory frameworks, international collaborations, and security-first use cases.


Data Governance in DevOps: Ensuring Compliance in the AI Era

Implementing effective CI/CD pipeline governance in the age of AI requires a multifaceted approach. It starts with establishing clear policies outlining compliance requirements, security standards, and ethical guidelines for AI development. These policies should be embedded into the pipeline through automated checks and gates. Leveraging advanced automation tools for continuous compliance checking throughout the pipeline is essential. These tools can scan code for vulnerabilities, check for adherence to coding standards, and even analyze AI models for potential biases or unexpected behaviors. Robust version control and change management processes are also crucial components of pipeline governance. They ensure that every change to the codebase or AI model is tracked, reviewed, and approved before progressing through the pipeline. We can't forget logging and auditing. Comprehensive logging and monitoring of all pipeline activities provide the necessary audit trails for compliance demonstration and post-incident analysis. In the context of AI, this extends to monitoring deployed models for performance drift or unexpected behaviors, ensuring ongoing compliance post-deployment. 
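
As a rough illustration of what such an automated gate might look like in practice, here is a minimal sketch of a pipeline step that fails the build when policy checks do not pass. The report format, policy names, and thresholds are illustrative assumptions, not any particular tool's interface.

```python
"""Minimal sketch of an automated compliance gate for a CI/CD pipeline.

The scan-report fields, policy names, and thresholds are illustrative
assumptions rather than any specific tool's format.
"""
import json
import sys


def evaluate_gates(report_path: str) -> list[str]:
    """Return a list of policy violations found in a scan report."""
    with open(report_path) as f:
        report = json.load(f)

    violations = []
    # Gate 1: block builds with known critical vulnerabilities in the code.
    if report.get("critical_vulnerabilities", 0) > 0:
        violations.append("critical vulnerabilities present")
    # Gate 2: require a reviewed and approved change record (change management).
    if not report.get("change_approved", False):
        violations.append("change has not been reviewed and approved")
    # Gate 3: for AI models, enforce a bias threshold agreed with governance.
    if report.get("model_bias_score", 0.0) > 0.05:  # illustrative threshold
        violations.append("model bias score exceeds policy threshold")
    return violations


if __name__ == "__main__":
    found = evaluate_gates(sys.argv[1])
    for v in found:
        print(f"POLICY VIOLATION: {v}")
    # A nonzero exit code fails the pipeline stage, acting as the "gate".
    sys.exit(1 if found else 0)
```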


Top 10 Cloud Data Center Stories of 2024

If you work in the data center industry, you may use the term on-premise (or on-prem) frequently. But have you ever stopped to wonder how the phrase entered the data center lexicon – or considered why on-premise doesn’t make grammatical sense? In a nutshell, the answer is that it should be on-premises – note the s on the end – because premise and premises are different words. If not, you’ll be enlightened by our coverage of the history of the term on-prem and why it has long irked certain CIOs. ... The more complex your cloud architecture becomes, the harder it is to identify misconfigurations and other security risks. That’s why the ability to automate security assessments is growing increasingly important. But how good are the solutions that cloud providers offer for this purpose? To find out, we took a close look at compliance reporting tools from Azure and GCP. The takeaway was that these solutions can automate much of the work necessary to identify misconfigurations that could trigger compliance violations, but they’re no substitute for human experts. ... What was less often discussed – but equally important – is the role of edge infrastructure in AI. That’s what we focused on in our report about edge AI, meaning AI workloads that run at the network edge instead of in traditional cloud data centers.


Clop Ransomware Takes Responsibility for Cleo Mass Exploits

Whether Clop is actually responsible for attacks targeting various types of Cleo's managed file transfer (MFT) software couldn't be confirmed. Separately, on Dec. 10, British cybersecurity expert Kevin Beaumont reported having evidence that the ransomware group Termite possessed a zero-day exploit for vulnerabilities in the Cleo products. Security experts said both groups may well have been involved, either separately or together. "Although Cl0p posted a message on their website, this is not hard evidence pointing to a single threat group's involvement. Therefore, any discussion of whether Termite or Cl0p are behind this exploit is speculation until proven with other indicators/evidence," said Christiaan Beek, senior director of threat analytics at cybersecurity firm Rapid7. "We have seen Cl0p utilize complex chains similar to this vulnerability in multiple file transfer use cases before, such as MOVEit and Accellion FTA in 2021," Beek added. ... The latest attacks appear, at least in part, to target CVE-2024-50623, an unrestricted file upload vulnerability in the managed file transfer products Cleo Harmony, VLTrader and LexiCom. Exploiting the vulnerability enables attackers to remotely execute code with escalated privileges.


Balancing security and user experience to improve fraud prevention strategies

There may not be one right way of handling the balance of security and user-friendly customer experience. Different institutions and their customers will have different needs, and processes might vary somewhat. But overall, there should be clear, easy-to-follow standards and checkpoints built into whatever financial institutions do. For instance, some banks or credit card companies may allow customers to institute their own stopgap for purchases over a certain amount, which may reduce the incentive for relatively large-scale fraud. These companies could also introduce some level of personalization into the process, like letting customers easily turn a credit or debit card on and off themselves via an app or site. ... Meanwhile, barely a day seems to go by without coverage of fraud or of personal information exposed by the hacking of some corporation, and some speculate that increasingly advanced technology may make fraud easier to perpetrate. With this in mind, there may be a greater emphasis placed on enhancing security, on experimenting to find what works best for different institutions, and on having a process in place that allows customers to have confidence in their banks and credit card companies.


Generative AI Is Just the Beginning — Here’s Why Autonomous AI is Next

Embracing this technology will unlock significant opportunities to improve organizational efficiency and accuracy. But before we dive into this, let us start with some definitions. Autonomous AI refers to systems that can perform tasks without human intervention. In contrast, generative AI systems focus on content creation based on existing data. What sets autonomous AI apart is its ability to self-manage. Understanding this difference is crucial, enabling organizations to use AI for more complex operations like predictive maintenance and resource optimization. ... The first step in successfully integrating autonomous AI into your organization is implementing robust data governance frameworks to support these advanced systems. Establish clear data privacy and transparency guidelines to ensure autonomous AI operates within ethical boundaries. It’s crucial to incorporate technical controls that prevent the AI from making reckless decisions, aligning its actions with your organizational values. ... When exploring the future of autonomous AI within your organization, it’s crucial to monitor and evaluate your autonomous AI systems regularly. Continuous assessment allows you to understand how the AI is performing and identify potential improvement areas.


Privacy by design approach drives business success in today’s digital age

Businesses that adhere to data privacy practices validate the upkeep of customer data and data privacy, earning them a stronger brand reputation. They should also ensure privacy is embedded in the organisation’s framework across technology, products, and services, an approach known as Privacy by Design (PbD). ... The PbD framework was developed in 1995 by Dr. Ann Cavoukian, Information & Privacy Commissioner of Ontario, jointly with the Dutch Data Protection Authority and the Netherlands Organisation for Applied Scientific Research. It aimed to cultivate and embed privacy defences into the design process of a product, service, or system, so that privacy becomes the default setting built in at the very beginning rather than an afterthought. This framework is founded on seven core principles: being proactive rather than reactive, privacy as the default setting, privacy embedded into design, full functionality, end-to-end security, visibility and transparency, and respect for user privacy. ... The proactive nature of the PbD approach signals a company’s commitment to protecting customers’ sensitive personal information. PbD enables companies to have personalised engagement with customers while respecting their privacy preferences.


Top 10 cybersecurity misconfigurations: Nail the setup to avoid attacks

Despite the industry-wide buzz about concepts like zero trust, which is rooted in least-privileged access control, this weakness still runs rampant. CISA’s publication calls out excessive account privileges, elevated service accounts, and non-essential use of elevated accounts. Anyone who has worked in IT or cyber for some time knows that many of these issues can be traced back to human behavior and the general demands of working in complex environments. ... Another fundamental security control that makes an appearance is the need to segment networks, a practice that again ties to the broader push for zero trust. By failing to segment networks, organizations fail to establish security boundaries between different systems, environments, and data types. This allows malicious actors to compromise a single system and then move freely across systems without encountering friction, additional security controls, or boundaries that could impede their nefarious activities. The publication specifically calls out the lack of segmentation between IT and OT networks, which puts at risk OT environments that carry real-world security and safety implications, such as industrial control systems.


Why are Indian enterprises betting big on hybrid multi-cloud strategies?

The multi-cloud strategy in India is deeply intertwined with the country’s broader digital transformation initiatives. The Government of India’s Digital India program and initiatives like the National Cloud Initiatives are providing a robust framework for cloud adoption. ... The importance of edge computing is growing, and the rollout of 5G is opening up new possibilities for distributed cloud architectures. Telecom titans like Jio and Airtel are investing substantially in cloud-native infrastructure, creating ripple effects throughout industries. On the other hand, startup ecosystems play a crucial role too. Bangalore, often called the Silicon Valley of India, has become a hotbed for cloud-native technologies. Companies and numerous cloud consulting firms are developing cutting-edge multi-cloud solutions that are gaining global recognition. Foreign investments are pouring in. Major cloud providers like AWS, Microsoft Azure, and Google Cloud are expanding their infrastructure in India, with dedicated data centers that meet local compliance requirements. This local presence is critical for enterprises concerned about data sovereignty and latency.



Quote for the day:

"You aren’t going to find anybody that’s going to be successful without making a sacrifice and without perseverance." -- Lou Holtz

Daily Tech Digest - October 03, 2024

Why Staging Is a Bottleneck for Microservice Testing

Multiple teams often wait for their turn to test features in staging. This creates bottlenecks. The pressure on teams to share resources can severely delay releases, as they fight for access to the staging environment. Developers who attempt to spin up the entire stack on their local machines for testing run into similar issues. As distributed systems engineer Cindy Sridharan notes, “I now believe trying to spin up the full stack on developer laptops is fundamentally the wrong mindset to begin with, be it at startups or at bigger companies.” The complexities of microservices make it impractical to replicate entire environments locally, just as it’s difficult to maintain shared staging environments at scale. ... From a release process perspective, the delays caused by a fragile staging environment lead to slower shipping of features and patches. When teams spend more time fixing staging issues than building new features, product development slows down. In fast-moving industries, this can be a major competitive disadvantage. If your release process is painful, you ship less often, and the cost of mistakes in production is higher. 


Misconfiguration Madness: Thwarting Common Vulnerabilities in the Financial Sector

Financial institutions require legions of skilled security personnel in order to overcome the many challenges facing their industry. Developers are an especially important part of that elite cadre of defenders for a variety of reasons. First and foremost, security-aware developers can write secure code for new applications, which can thwart attackers by denying them a foothold in the first place. If there are no vulnerabilities to exploit, an attacker won't be able to operate, at least not very easily. Developers with the right training can also help to support both modern and legacy applications by examining the existing code that makes up some of the primary vectors used to attack financial institutions. That includes cloud misconfigurations, lax API security, and the many legacy bugs found in applications written in COBOL and other aging computer languages. However, the task of nurturing and maintaining security-aware developers in the financial sector won’t happen on its own. It requires precise, immersive training programs that are highly customizable and matched to the specific complex environment that a financial services institution is using.


3 things to get right with data management for gen AI projects

The first is a series of processes — collecting, filtering, and categorizing data — that may take several months for knowledge management (KM) or retrieval-augmented generation (RAG) models. Structured data is relatively easy, but unstructured data, while much more difficult to categorize, is the most valuable. “You need to know what the data is, because it’s only after you define it and put it in a taxonomy that you can do anything with it,” says Shannon. ... “We started with generic AI usage guidelines, just to make sure we had some guardrails around our experiments,” she says. “We’ve been doing data governance for a long time, but when you start talking about automated data pipelines, it quickly becomes clear you need to rethink the older models of data governance that were built more around structured data.” Compliance is another important area of focus. As a global enterprise thinking about scaling some of their AI projects, Harvard keeps an eye on evolving regulatory environments in different parts of the world. It has an active working group dedicated to following and understanding the EU AI Act, and before their use cases go into production, they run through a process to make sure all compliance obligations are satisfied.


Fundamentals of Data Preparation

Data preparation is intended to improve the quality of the information that ML and other information systems use as the foundation of their analyses and predictions. Higher-quality data leads to greater accuracy in the analyses the systems generate in support of business decision-makers. This is the textbook explanation of the link between data preparation and business outcomes, but in practice, the connection is less linear. ... Careful data preparation adds value to the data itself, as well as to the information systems that rely on the data. It goes beyond checking for accuracy and relevance and removing errors and extraneous elements. The data-prep stage gives organizations the opportunity to supplement the information by adding geolocation, sentiment analysis, topic modeling, and other aspects. Building an effective data preparation pipeline begins long before any data has been collected. As with most projects, the preparation starts at the end: identifying the organization’s goals and objectives, and determining the data and tools required to achieve those goals. ... Appropriate data preparation is the key to the successful development and implementation of AI systems in large part because AI amplifies existing data quality problems. 
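
As a toy illustration of the kind of quality-plus-enrichment step described above, here is a minimal sketch in plain Python. The record fields and the tiny sentiment lexicon are invented for the example, not drawn from any particular dataset or library.

```python
"""Minimal sketch of a data-preparation step: quality checks plus enrichment.

The records and the toy sentiment lexicon are illustrative assumptions.
"""

RAW_RECORDS = [
    {"id": 1, "text": "Service was excellent and fast", "country": "IN"},
    {"id": 2, "text": "", "country": "US"},  # fails the completeness check
    {"id": 1, "text": "Service was excellent and fast", "country": "IN"},  # duplicate
]

POSITIVE = {"excellent", "fast", "good"}
NEGATIVE = {"slow", "broken", "bad"}


def prepare(records):
    seen_ids = set()
    prepared = []
    for rec in records:
        # Quality gates: completeness and de-duplication.
        if not rec["text"] or rec["id"] in seen_ids:
            continue
        seen_ids.add(rec["id"])
        # Enrichment: a naive sentiment tag added during preparation,
        # standing in for richer steps like geolocation or topic modeling.
        words = set(rec["text"].lower().split())
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        rec["sentiment"] = ("positive" if score > 0
                            else "negative" if score < 0 else "neutral")
        prepared.append(rec)
    return prepared


print(prepare(RAW_RECORDS))  # one clean, enriched record survives
```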


How to Rein in Cybersecurity Tool Sprawl

Security tool sprawl happens for many different reasons. Adding new tools and new vendors as new problems arise, without evaluating the tools already in place, is often how sprawl starts. The sheer glut of tools available in the market can make it easy for security teams to embrace the latest and greatest solutions. “[CISOs] look for the newest, the latest and the greatest. They're the first adopter type,” says Reiter. A lack of communication between departments and teams in an enterprise can also contribute. “There's the challenge of teams not necessarily knowing the day-to-day functions of other teams,” says Mar-Tang. Security leaders can start to wrap their heads around the problem of sprawl by running an audit of the security tools in place. Which teams use which tools? How often are the tools used? How many vendors supply those tools? What are the lengths of the vendor contracts? Breaking down communication barriers within an enterprise will be a necessary part of answering questions like these. “Talk to the … security and IT risk side of your house, the people who clean up the mess. You have an advocate and a partner to be able to find out where you have holes and where you have sprawl,” recommends Kris Bondi, CEO and co-founder of endpoint security company Mimoto.


The Promise and Perils of Generative AI in Software Testing

The journey from human automation tester to AI test automation engineer is transformative. Traditionally, transitioning to test automation required significant time and resources, including learning to code and understanding automation frameworks. AI removes these barriers and accelerates development cycles, dramatically reducing time-to-market and improving accuracy, all while decreasing the level of admin tasks for software testers. AI-powered tools can interpret test scenarios written in plain language, automatically generate the necessary code for test automation, and execute tests across various platforms and languages. This dramatically reduces the enablement time, allowing QA professionals to focus on strategic tasks instead of coding complexities. ... As GenAI becomes increasingly integrated into software development life cycles, understanding its capabilities and limitations is paramount. By effectively managing these dynamics, development teams can leverage GenAI’s potential to enhance their testing practices while ensuring the integrity of their software products.
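
To picture what "plain language in, test code out" can look like, here is a hedged sketch: the scenario is written as an English comment, and the pytest code below it is the kind of output such a tool might generate. The FakeApp object and its methods are hypothetical stand-ins for a real system under test, not any specific product's API.

```python
# A plain-language scenario a tester might write:
#   "When a registered user logs in with a valid password,
#    they should land on the dashboard."
#
# Below is a sketch of the kind of pytest code an AI tool might
# generate from that sentence.
import pytest


class FakeApp:
    """Hypothetical stand-in for the system under test."""
    def __init__(self):
        self.users = {"alice": "s3cret"}

    def login(self, username, password):
        if self.users.get(username) == password:
            return "dashboard"
        return "login_error"


@pytest.fixture
def app():
    return FakeApp()


def test_valid_login_lands_on_dashboard(app):
    assert app.login("alice", "s3cret") == "dashboard"


def test_invalid_password_shows_error(app):
    assert app.login("alice", "wrong") == "login_error"
```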


Near-'perfctl' Fileless Malware Targets Millions of Linux Servers

The malware looks for vulnerabilities and misconfigurations to exploit in order to gain initial access. To date, Aqua Nautilus reports, the malware has likely targeted millions of Linux servers, and compromised thousands. Any Linux server connected to the Internet is in its sights, so any server that hasn't already encountered perfctl is at risk. ... By tracking its infections, researchers identified three Web servers belonging to the threat actor: two that were previously compromised in prior attacks, and a third likely set up and owned by the threat actor. One of the compromised servers was used as the primary base for malware deployment. ... To further hide its presence and malicious activities from security software and researcher scrutiny, it deploys a few Linux utilities repurposed into user-level rootkits, as well as one kernel-level rootkit. The kernel rootkit is especially powerful, hooking into various system functions to modify their functionality, effectively manipulating network traffic, undermining Pluggable Authentication Modules (PAM), establishing persistence even after primary payloads are detected and removed, or stealthily exfiltrating data. 


Three hard truths hindering cloud-native detection and response

Most SOC teams either lack the proper tooling or have so many cloud security point tools that the management burden is untenable. Cloud attacks happen far too fast for SOC teams to flip from one dashboard to another to determine if an application anomaly has implications at the infrastructure level. Given the interconnectedness of cloud environments and the accelerated pace at which cloud attacks unfold, if SOC teams can’t see everything in one place, they’ll never be able to connect the dots in time to respond. More importantly, because everything in the cloud happens at warp speed, we humans need to act faster, which can be nerve-wracking and increase the chance of accidentally breaking something. While the latter is a legitimate concern, if we want to stay ahead of our adversaries, we need to get comfortable with the accelerated pace of the cloud. While there are no quick fixes to these problems, the situation is far from hopeless. Cloud security teams are getting smarter and more experienced, and cloud security toolsets are maturing in lockstep with cloud adoption. And I, like many in the security community, am optimistic that AI can help deal with some of these challenges.


How to Fight ‘Technostress’ at Work

Digital stressors don’t occur in isolation, according to the researchers, which necessitates a multifaceted approach. “To address the problem, you can’t just address the overload and invasion,” Thatcher said. “You have to be more strategic.” “Let’s say I’m a manager, and I implement a policy that says no email on weekends because everybody’s stressed out,” Thatcher said. “But everyone stays stressed out. That’s because I may have gotten rid of techno-invasion—that feeling that work is intruding on my life—but on Monday, when I open my email, I still feel really overloaded because there are 400 emails.” It’s crucial for managers to assess the various digital stressors affecting their employees and then target them as a combination, according to the researchers. That means to address the above problem, Thatcher said, “you can’t just address invasion. You can’t just address overload. You have to address them together,” he said. ... Another tool for managers is empowering employees, according to the study. “As a manager, it may feel really dangerous to say, ‘You can structure when and where and how you do work.’ 


Fix for BGP routing insecurity ‘plagued by software vulnerabilities’ of its own, researchers find

Under BGP, there is no way to authenticate routing changes. The arrival of RPKI just over a decade ago was intended to fix that, using a digital record called a Route Origin Authorization (ROA) that identifies an ISP as having authority over specific IP infrastructure. Route origin validation (ROV) is the process a router undergoes to check that an advertised route is authorized by the correct ROA certificate. In principle, this makes it impossible for a rogue router to maliciously claim a route it does not have any right to. RPKI is the public key infrastructure that glues this all together, security-wise. The catch is that, for this system to work, RPKI needs a lot more ISPs to adopt it, something which until recently has happened only very slowly. ... “Since all popular RPKI software implementations are open source and accept code contributions by the community, the threat of intentional backdoors is substantial in the context of RPKI,” they explained. A software supply chain that creates such vital software enabling internet routing should be subject to a greater degree of testing and validation, they argue.
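
For readers new to ROV, the following minimal sketch shows the core check a validating router performs, classifying announcements into the standard valid/invalid/not-found states (RFC 6811). The ROA entries are illustrative assumptions; real deployments consume cryptographically signed RPKI data rather than a hard-coded list.

```python
"""Minimal sketch of route origin validation (ROV) against a set of ROAs."""
import ipaddress

# Each ROA authorizes an origin AS for a prefix, up to a maximum length.
ROAS = [
    {"prefix": "192.0.2.0/24", "max_length": 24, "origin_as": 64500},
]


def validate(prefix: str, origin_as: int) -> str:
    """Classify an announcement as valid, invalid, or not-found."""
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa in ROAS:
        roa_net = ipaddress.ip_network(roa["prefix"])
        if announced.subnet_of(roa_net):
            covered = True
            if (origin_as == roa["origin_as"]
                    and announced.prefixlen <= roa["max_length"]):
                return "valid"
    return "invalid" if covered else "not-found"


print(validate("192.0.2.0/24", 64500))    # valid: right origin, within max length
print(validate("192.0.2.0/24", 64666))    # invalid: covered prefix, wrong origin AS
print(validate("198.51.100.0/24", 64500)) # not-found: no covering ROA
```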



Quote for the day:

"You may have to fight a battle more than once to win it." -- Margaret Thatcher

Daily Tech Digest - January 27, 2024

The future of biometrics in a zero trust world

Nearly one in three CEOs and members of senior management have fallen victim to phishing scams, either by clicking on the same link or sending money. C-level executives are the primary targets for biometric and deep fake attacks because they are four times more likely to be victims of phishing than other employees, according to Ivanti’s State of Security Preparedness 2023 Report. Ivanti found that whale phishing is the latest digital epidemic to attack the C-suite of thousands of companies. ... In response to the increasing need for better biometric security globally, Badge Inc. recently announced the availability of its patented authentication technology that renders personal identity information (PII) and biometric credential storage obsolete. Badge also announced an alliance with Okta, the latest in a series of partnerships aimed at strengthening Identity and Access Management (IAM) for their shared enterprise customers. Srivastava explained how her company’s approach to biometrics eliminates the need for passwords, device redirects, and knowledge-based authentication (KBA). Badge supports an enroll once and authenticate on any device workflow that scales across an enterprise’s many threat surfaces and devices. 


Understanding CQRS Architecture

CRUD and CQRS are both tactical patterns, concentrating on the implementation specifics at the level of individual services. Therefore, asserting that an organization relies entirely on a CQRS architecture may not be entirely accurate. While certain services may adopt this architecture, it is typical for other services to employ simpler paradigms. The entire organization may not adhere to a unified style for all problems. The CRUD architecture assumes the existence of a single model for both read and update operations. CRUD operations are typically linked with traditional relational database systems, and numerous applications adopt a CRUD-based approach for data management. Conversely, the CQRS architecture assumes the presence of distinct models for queries and commands. While this paradigm is more intricate to implement and introduces certain subtleties, it provides the advantage of enabling stricter enforcement of data validation, implementation of robust security measures, and optimization of performance. These definitions may appear somewhat vague and abstract at the moment, but clarity will emerge as we delve into the details. It's important to note here that CQRS or CRUD should not be regarded as an overarching philosophy to be blindly applied in all circumstances. 
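
A minimal sketch may make the contrast concrete: on the write side, a command handler validates and records changes; the read side answers queries from a separate model kept in sync by a projection. The banking domain and in-memory stores here are illustrative assumptions, not a prescribed implementation.

```python
"""Minimal sketch of CQRS-style separation of commands and queries."""


class DepositCommandHandler:
    """Write side: commands mutate state and enforce validation rules."""
    def __init__(self, event_log):
        self.event_log = event_log

    def handle(self, account_id: str, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")  # strict write-side validation
        self.event_log.append({"account": account_id, "amount": amount})


class BalanceQueryHandler:
    """Read side: queries use a separate model optimized for reads."""
    def __init__(self, read_model):
        self.read_model = read_model

    def balance(self, account_id: str) -> int:
        return self.read_model.get(account_id, 0)


def project(event_log, read_model):
    """Projection keeping the read model in sync with write-side events."""
    read_model.clear()
    for event in event_log:
        read_model[event["account"]] = (
            read_model.get(event["account"], 0) + event["amount"])


events, balances = [], {}
DepositCommandHandler(events).handle("acct-1", 100)
project(events, balances)
print(BalanceQueryHandler(balances).balance("acct-1"))  # 100
```

Note how validation lives entirely on the command path: this is the sense in which CQRS enables stricter enforcement of data validation without burdening the query side.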


Role of Wazuh in building a robust cybersecurity architecture

Wazuh is a free and open source security solution that offers unified XDR and SIEM protection across several platforms. Wazuh protects workloads across virtualized, on-premises, cloud-based, and containerized environments to provide organizations with an effective approach to cybersecurity. By collecting data from multiple sources and correlating it in real time, it offers a broader view of an organization's security posture. Wazuh plays a significant role in implementing a cybersecurity architecture, providing a platform for security information and event management, active response, compliance monitoring, and more. It provides flexibility and interoperability, enabling organizations to deploy Wazuh agents across diverse operating systems. Wazuh is equipped with a File Integrity Monitoring (FIM) module that helps detect file changes on monitored endpoints. It takes this a step further by combining the FIM module with threat detection rules and threat intelligence sources to detect malicious files, allowing security analysts to stay ahead of the threat curve. Wazuh also provides out-of-the-box support for compliance frameworks like PCI DSS, HIPAA, GDPR, NIST SP 800-53, and TSC.


Budget cuts loom for data privacy initiatives

In addition to difficulty understanding the privacy regulatory landscape, organizations also face other data privacy challenges, including budget. 43% of respondents say their privacy budget is underfunded and only 36% say their budget is appropriately funded. Looking at the year ahead, only 24% expect their budget to increase (down 10 points from last year), and only one percent say it will remain the same (down 26 points from last year). 51% expect a decrease in budget, significantly higher than last year, when only 12% expected a decrease. For those seeking resources, technical privacy positions are in highest demand, with 62% of respondents indicating there will be increased demand for technical privacy roles in the next year, compared to 55% for legal/compliance roles. However, respondents indicate there are skills gaps among these privacy professionals; they cite experience with different types of technologies and/or applications (63%) as the biggest one. When looking at common privacy failures, respondents pinpointed the lack of or poor training (49%), not practicing privacy by design (44%) and data breaches (42%) as the main concerns.


How to become a Chief Information Security Officer

In general, the CISO position is well-paid. Due to high demand and a limited talent pool, top-tier CISOs have commanded salaries in excess of $2.3 million. Nonetheless, executive remuneration may vary based on industry, company size and specifics of a role. The CISO typically manages a team of cyber security experts (sometimes multiple teams) and collaborates with high-level business stakeholders to facilitate the strategic development and completion of cyber security initiatives. ... While experience in cyber security does count for a lot, and while smart and talented people do ascend to the CISO role without extensive formal schooling, it can pay to get the right education. Most enterprises will expect that a potential CISO have a bachelor’s degree in computer science (or a similar discipline). There are exceptions, but an undergraduate degree is often used as a credibility benchmark. ... When it comes to real-world experience, most CISO roles require a minimum of five years’ time spent in the industry. A potential CISO should maintain broad knowledge of a variety of platforms and solutions, along with a strong understanding of both cyber security history and modern day cyber security threats.


I thought software subscriptions were a ripoff until I did the math

Selling perpetual licenses means you get a big surge in revenue with each new release. But then you have to watch that cash pile dwindle as you work on the next version and try to convince your customers to pay for the upgrade. If you want the opportunity to continually improve your software, you need to bring in enough revenue each year to justify the time and resources you spend on the project. That's the difference between a sustainable business and a hobby. It strikes me that the real objection to software as a subscription isn't to the business model, but rather to the price. If you think a fair price for a piece of software is closer to $50 than $500, and you should be able to use it in perpetuity, you're telling the developer that you're willing to pay them no more than a few bucks a month. They're trying to tell you that's not enough to sustain a software business, and maybe you should try a free, open-source option instead. All the developers that are migrating to a cloud-based subscription model are taking a necessary step to help ensure their long-term survival. The challenge for companies playing in this space is to make it crystal clear that their subscriptions offer real value.
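
The back-of-the-envelope math is easy to check; the prices and the assumed upgrade cycle below are illustrative, not from the original article.

```python
# Back-of-the-envelope math behind "closer to $50 than $500".
perpetual_price = 50.0     # what the buyer feels is a fair one-time price
upgrade_cycle_years = 3    # assumed years of use before paying for an upgrade

monthly_equivalent = perpetual_price / (upgrade_cycle_years * 12)
print(f"${monthly_equivalent:.2f}/month")  # prints $1.39/month to the developer
```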


Filling the Cybersecurity Talent Gap

Thankfully, there is a talented group in the veteran community ready and willing to meet the challenge. Through their unique skills, discipline, and unmatched experience, veterans are perfectly suited to help address the talent gap and growing cyber threats we face. Not only that, but veterans will find that IT and cybersecurity provide a second career as they transition out of their service. Veterans leave service with a wide range of talents that have several applications outside of the military. This includes both what are often called "soft skills," or those that are beneficial in a number of settings, as well as technical abilities well-suited for cybersecurity and IT. ... As the industry continues to incorporate more secure by design principles that guide how we approach security and cyber resiliency, we need a workforce that understands the importance of security and defense. To make this a reality, we need both the government and private companies to step up and create the right pathways for veterans to enter the workforce. This can include expanding the GI Bill to add additional incentives for careers in cybersecurity. Private companies should also offer more hands-on workshops and training that can both provide a way for applicants to learn and help companies fill their open positions.


How Much Architecture Is “Enough?”: Balancing the MVP and MVA Helps You Make Better Decisions

The critical challenge that the MVA must solve is that it must answer the MVP’s current challenges while anticipating, but not actually solving, future challenges. In other words, the MVA must not require unacceptable levels of rework to actually solve those future problems. Some rework is okay and expected, but the words "complete rewrite" mean that the architecture has failed and all bets on viability are off. As a result, the MVA hangs in a dynamic balance between solving future problems that may never exist and letting technical debt pile up to the point where it leads, metaphorically, to architectural bankruptcy. Being able to balance these two forces is where experience comes in handy. ... The development team creates the initial MVA based on their initial, often incomplete, understanding of the problems the MVA needs to solve. They will not usually have much in the way of QARs (quality attribute requirements), perhaps only broad organizational "standards" that are more aspirational than accurate. These initial statements are often so vague as to be unhelpful, e.g. "the system must support very large numbers of concurrent users", "the system must be easy to support and maintain", "the system must be secure against external threats", etc.


Group permission misconfiguration exposes Google Kubernetes Engine clusters

The problem is that in most other systems “authenticated users” are users that the administrators created or defined in the system. This is also the case in privately self-managed Kubernetes clusters or for the most part in clusters set up on other cloud services providers such as Azure or AWS. So, it’s not hard to see how some administrators might conclude that system:authenticated refers to a group of verified users and then decide to use it as an easy method to assign some permissions to all those trusted users. “GKE, in contrast to Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS), exposes a far-reaching threat since it supports both anonymous and full OpenID Connect (OIDC) access,” the Orca researchers said. “Unlike AWS and Azure, GCP’s managed Kubernetes solution considers any validated Google account as an authenticated entity. Hence, system:authenticated in GKE becomes a sensitive asset administrators should not overlook.” The Kubernetes API can integrate with many authentication systems and since access to Google Cloud Platform and all of Google’s services in general is done through Google accounts, it makes sense to also integrate GKE with Google’s IAM and OAuth authentication and authorization system.
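
As a hedged illustration of how an administrator might audit for this, the sketch below uses the official Kubernetes Python client to list ClusterRoleBindings whose subjects include the system:authenticated group. The library calls are real, but the audit logic is a simplified example; a complete audit would also check namespaced RoleBindings.

```python
"""Minimal sketch: find cluster-wide bindings granted to system:authenticated.

Assumes the official `kubernetes` Python client and a configured kubeconfig.
Run against GKE, any binding listed is effectively granted to every
validated Google account.
"""
from kubernetes import client, config


def find_risky_bindings():
    config.load_kube_config()  # uses the current kubeconfig context
    rbac = client.RbacAuthorizationV1Api()
    risky = []
    for binding in rbac.list_cluster_role_binding().items:
        for subject in binding.subjects or []:
            if subject.kind == "Group" and subject.name == "system:authenticated":
                risky.append((binding.metadata.name, binding.role_ref.name))
    return risky


for binding_name, role in find_risky_bindings():
    print(f"ClusterRoleBinding {binding_name!r} grants role {role!r} "
          f"to ALL authenticated accounts")
```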


Will the Rise of Generative AI Increase Technical Debt?

The rise of generative AI-related tools will likely increase technical debt, both due to the rush to hastily adopt new capabilities and the need to mold AI models to suit specific requirements. “New LLMs and generative AI applications will undoubtedly increase technical debt in the future, or at a minimum, greatly increase the need to manage that debt proactively,” said Quillin. “It starts with new requirements to continually manage, maintain, and nurture these models from a broad range of new KPIs from bias, concept drift, and shifting business, consumer, and environmental inputs and goals,” he said. Incorporating AI may require a significant upfront commitment, leading to additional technical debt. “It won’t be just a build-and-maintain scenario, but rather, the first of many steps on a long road ahead,” said Prince Kohli, CTO of Automation Anywhere. Product companies with a generative AI focus must invest in creating a data and model strategy, a data architecture to work with AI, controls for the AI and more. “Technology disruptions and pivots such as this always lead to this kind of technical debt that must be continually paid down, but it’s the price of admittance,” he said.



Quote for the day:

"The best preparation for tomorrow is doing your best today." -- H. Jackson Brown, Jr.