
Daily Tech Digest - July 12, 2025


Quote for the day:

"If you do what you’ve always done, you’ll get what you’ve always gotten." -- Tony Robbins


Why the Value of CVE Mitigation Outweighs the Costs

When it comes to CVEs and continuous monitoring, meeting compliance requirements can be daunting and confusing. Compliance isn’t just achieved; rather, it is a continuous maintenance process. Compliance frameworks might require additional standards, such as Federal Information Processing Standards (FIPS), Federal Risk and Authorization Management Program (FedRAMP), Security Technical Implementation Guides (STIGs) and more that add an extra layer of complexity and time spent. The findings are clear. Telecommunications and infrastructure companies reported an average of $3 million in new revenue annually by improving their container security enough to qualify for security-sensitive contracts. Healthcare organizations averaged $7.3 million in new revenue, often driven by unlocking expansion into compliance-heavy markets. ... The industry has long championed “shifting security left,” or embedding checks earlier in the pipeline to ensure security measures are incorporated throughout the entire software development life cycle. However, as CVE fatigue worsens, many teams are realizing they need to “start left.” That means: Using hardened, minimal container images by default; Automating CVE triage and patching through reproducible builds; Investing in secure-by-default infrastructure that makes vulnerability management invisible to most developers
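The "start left" ideas above lend themselves to automation. As a rough sketch of automated CVE triage (field names, severities, and thresholds are assumptions, not any particular scanner's output), findings can be filtered down to those that are both severe and actionable:

```python
# Hedged sketch of automated CVE triage: given scanner findings, surface
# only those that are actionable (a fixed version exists) and severe
# enough to block a build. Field names are assumed, not a real scanner's.
findings = [
    {"id": "CVE-2025-0001", "severity": "critical", "fixed_version": "2.1.4"},
    {"id": "CVE-2025-0002", "severity": "low", "fixed_version": None},
    {"id": "CVE-2025-0003", "severity": "high", "fixed_version": "0.9.8"},
]

BLOCKING = {"critical", "high"}

def triage(findings):
    # Actionable findings: a patch exists and the severity warrants action.
    return [f["id"] for f in findings
            if f["severity"] in BLOCKING and f["fixed_version"]]

print(triage(findings))  # ['CVE-2025-0001', 'CVE-2025-0003']
```

In a pipeline, a non-empty triage result would fail the build and open patch tickets automatically, keeping the long tail of low-severity, unfixable findings out of developers' way.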


Generative AI: A Self-Study Roadmap

Building generative AI applications requires comfort with Python programming and basic machine learning concepts, but you don't need deep expertise in neural network architecture or advanced mathematics. Most generative AI work happens at the application layer, using APIs and frameworks rather than implementing algorithms from scratch. ... Modern generative AI development centers around foundation models accessed through APIs. This API-first approach offers several advantages: you get access to cutting-edge capabilities without managing infrastructure, you can experiment with different models quickly, and you can focus on application logic rather than model implementation. ... Generative AI applications require different API design patterns than traditional web services. Streaming responses improve user experience for long-form generation, allowing users to see content as it's generated. Async processing handles variable generation times without blocking other operations. ... While foundation models provide impressive capabilities out of the box, some applications benefit from customization to specific domains or tasks. Consider fine-tuning when you have high-quality, domain-specific data that foundation models don't handle well—specialized technical writing, industry-specific terminology, or unique output formats requiring consistent structure.
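The streaming and async patterns described here can be sketched with nothing but the standard library. This is an illustrative toy in which the model call is simulated, not any vendor's actual API:

```python
import asyncio

# Hypothetical sketch: stream tokens to the caller as they are generated,
# rather than waiting for the full completion. All names are illustrative.
async def generate_stream(prompt):
    # Stand-in for a model call; yields chunks as they become available.
    for token in prompt.split():
        await asyncio.sleep(0)  # simulate variable generation time
        yield token

async def collect(prompt):
    # The caller can render each chunk immediately, improving perceived
    # latency for long-form generation without blocking other work.
    chunks = []
    async for token in generate_stream(prompt):
        chunks.append(token)
    return " ".join(chunks)

print(asyncio.run(collect("streaming improves perceived latency")))
```

The same shape carries over to real SDKs: the client exposes an async iterator over response chunks, and the application renders or forwards each chunk as it arrives.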


Announcing GenAI Processors: Build powerful and flexible Gemini applications

At its core, GenAI Processors treat all input and output as asynchronous streams of ProcessorParts (i.e., two-way, bidirectional streaming). Think of it as standardized data parts (e.g., a chunk of audio, a text transcription, an image frame) flowing through your pipeline along with associated metadata. This stream-based API allows for seamless chaining and composition of different operations, from low-level data manipulation to high-level model calls. ... We anticipate a growing need for proactive LLM applications where responsiveness is critical. Even for non-streaming use cases, processing data as soon as it is available can significantly reduce latency and time to first token (TTFT), which is essential for building a good user experience. While many LLM APIs prioritize synchronous, simplified interfaces, GenAI Processors – by leveraging native Python features – offer a way to write responsive applications without making code more complex. ... GenAI Processors is currently in its early stages, and we believe it provides a solid foundation for tackling complex workflow and orchestration challenges in AI applications. While the Google GenAI SDK is available in multiple languages, GenAI Processors currently only supports Python.
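As a rough illustration of the stream-based composition idea, here is a generic asyncio sketch with made-up names; it shows how stages that consume and yield async streams chain naturally, but it is not the actual GenAI Processors API:

```python
import asyncio

# Illustrative only (not the real GenAI Processors API): each "processor"
# consumes an async stream of parts and yields a new stream, so stages
# compose by simple nesting.
async def source(parts):
    for p in parts:
        yield p

async def uppercase(stream):
    async for part in stream:
        yield part.upper()

async def tag(stream, label):
    async for part in stream:
        yield f"{label}:{part}"

async def run_pipeline():
    # Chaining: each stage starts emitting as soon as input arrives,
    # which is what reduces time to first token in a real pipeline.
    stream = tag(uppercase(source(["audio", "text"])), "part")
    return [p async for p in stream]

print(asyncio.run(run_pipeline()))  # ['part:AUDIO', 'part:TEXT']
```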


Scaling the 21st-century leadership factory

Identifying priority traits is critical; just as important, CEOs and their leadership teams must engage early and often with high-potential employees and unconventional thinkers in the organization, recognizing that innovation often comes from the edges of the business. Skip-level meetings are a powerful tool for this purpose. Most famously, Apple’s Steve Jobs would gather what he deemed the 100 most influential people at the company, including young engineers, to engage directly in strategy discussions—regardless of hierarchy or seniority. ... A culture of experimentation and learning is essential for leadership development—but it must be actively pursued. “Instillation of personal initiative, aggressiveness, and risk-taking doesn’t spring forward spontaneously,” General Jim Mattis explained in his 2019 book on leadership, Call Sign Chaos. “It must be cultivated for years and inculcated, even rewarded, in an organization’s culture. If the risk-takers are punished, then you will retain in your ranks only the risk averse,” he wrote. ... There are multiple ways to streamline decision-making, including redefining decision rights to focus on a handful of owners and distinguishing between different types of decisions, as not all choices are high stakes. 


Lessons learned from Siemens’ VMware licensing dispute

Siemens threatened to sue VMware if it didn’t provide ongoing support for the software and handed over a list of the software it was using that it wanted support for. Except that the list included software that it didn’t have any licenses for, perpetual or otherwise. Broadcom-owned VMware sued, Siemens countersued, and now the companies are battling over jurisdiction. Siemens wants the case to be heard in Germany, and VMware prefers the United States. Normally, if unlicensed copies of software are discovered during an audit, the customer pays the difference and maybe an additional penalty. After all, there are always minor mistakes. The vendors try to keep these costs at least somewhat reasonable, since at some point, customers will migrate from mission-critical software if the pain is high enough. ... For large companies, it can be hard to pivot quickly. Using open-source software can help reduce the risk of unexpected license changes, and, for many major tools there are third-party service providers that can offer ongoing support. Another option is SaaS software, Ringdahl says, because it does make license management a bit easier, since there’s usually transparency both for the customer and the vendor about how much usage the product is getting.


Microsoft says regulations and environmental issues are cramping its Euro expansion

One of the things that everyone needs to consider is how datacenter development in Europe is being enabled or impeded, Walsh said. "Because we have moratoriums coming at us. We have communities that don't want us there," she claimed, referring particularly to Ireland, where local opposition to bit barns has been hardening because of the amount of electricity they consume and their environmental impact. Another area of discussion at the Datacloud keynote was the commercial models for acquiring datacenter capacity, which it was felt had become unfit for the new environment where large amounts are needed quickly. "From our perspective, time to market is essential. We've done a lot of leasing in the last two years, and that is all time-to-market pressure," Walsh said. "I also manage land acquisition and land development, which includes permitting. So the joy of doing that is that when my permits are late, I can lease so I can actually solve my own problems, which is amazing, but the way things are going, it's going to be very difficult to continue to lease the infrastructure using co-location style funding. It's just getting too big, and it's going to get harder and harder to get up the chain, for sure," she explained. ... "European regulations and planning are very slow, and things take 18 months longer than anywhere else," she told attendees at Bisnow's Datacenter Investment Conference and Expo (DICE) in Ireland.


350M Cars, 1B Devices Exposed to 1-Click Bluetooth RCE

The scope of affected systems is massive. The developer, OpenSynergy, proudly boasts on its homepage that Blue SDK — and RapidLaunch SDK, which is built on top of it and therefore also possibly vulnerable — has been shipped in 350 million cars. Those cars come from companies like Mercedes-Benz, Volkswagen, and Skoda, as well as a fourth known but unnamed company. Since Ford integrated Blue SDK into its Android-based in-vehicle infotainment (IVI) systems in November, Dark Reading has reached out to determine whether it too was exposed. ... Like any Bluetooth hack, the one major hurdle in actually exploiting these vulnerabilities is physical proximity. An attacker would likely have to position themselves within around 10 meters of a target device in order to pair with it, and the device would have to comply. Because Blue SDK is merely a framework, different devices might block pairing, limit the number of pairing requests an attacker could attempt, or at least require a click to accept a pairing. This is a point of contention between the researchers and Volkswagen. ... "Usually, in modern cars, an infotainment system can be turned on without activating the ignition. For example, in the Volkswagen ID.4 and Skoda Superb, it's not necessary," he says, though the case may vary vehicle to vehicle. 


Leaders will soon be managing AI agents – these are the skills they'll need, according to experts

An AI agent is essentially just "a piece of code", says Jarah Euston, CEO and Co-Founder of AI-powered labour platform WorkWhile, which connects frontline workers to shifts. "It may not have the same understanding, empathy, awareness of the politics of your organization, of the fears or concerns or ambitions of the people around that it is serving. "So managers have to be aware that the agent is only as good as how you've trained it. I don't think we're close yet to having agents that can operate without any human oversight. "As a manager, you want to leverage the AI to make you and your team more productive, but you constantly have to be checking, iterating and training your tools to get the most out of them."  ... Technological skills are expected to become increasingly vital over the next five years, outpacing the growth of all other skill categories. Leading the way are AI and big data, followed closely by networking, cybersecurity and overall technological literacy. The so-called 'soft skills' of creative thinking and resilience, flexibility and agility are also rising in importance, along with curiosity and lifelong learning. Empathy is one skill AI agents can't learn, says Women in Tech's Moore Aoki, and she believes this will advantage women.


Common Master Data Management (MDM) Pitfalls

In addition to failing to connect MDM’s value with business outcomes, “People start with MDM by jumping in with the technology,” Cooper said. “Then, they try to fit the people, processes, and master data into their selected technology.” Moreover, in the process of prioritizing technology first, organizations take for granted that they have good data quality, data that is clean and fit for purpose. Then, during a major initiative, such as migrating to a cloud environment, they discover their data is not so clean. ... Organizations fall into the pitfalls above and others because they try to do it alone, and most have never done MDM before. Instead, “Organizations have different capabilities with MDM,” said Cooper, “and you don’t know what you don’t know.” ... Connecting the MDM program to business objectives requires talking with the stakeholders across the organization, especially divisions with direct financial risks such as sales, marketing, procurement, and supply. Cooper said readers should learn the goals of each unit and how they measure success in growing revenue, reducing cost, mitigating risk, or operating more efficiently. ... Cooper advised focusing on data quality – e.g., through reference data – rather than technology. In the figure below, a company has data about a client, Emerson Electric, as shown on the left. 


Why Cloud Native Security Is More Complex Than You Think

Enterprise security tooling can help with more than just monitoring these vulnerabilities, though. Often, for older vulnerabilities that the software vendor has already patched, the tooling will offer "fix status" advice: a specific package version is shown to the developer or analyst responsible for remediating the vulnerability, and upgrading the current package to that later version resolves the alert. To confuse things further, the applications running in containers or serverless functions also need to be checked for non-compliance. The warnings that security tooling may present when these applications are checked against recognised compliance standards, frameworks or benchmarks are wide and varied. For example, if a serverless function has overly permissive access to another cloud service and an attacker gains access to the serverless function's code via a vulnerability, the attack's blast radius could increase exponentially. Or compliance checks often reveal that containers are run with inappropriate network settings. ... At a high level, these components and, importantly, how they interact with each other are why applications running in the cloud require time, effort and specialist expertise to secure.
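The "fix status" workflow reduces to a version comparison. A minimal sketch, assuming simple dotted-integer version strings (real scanners handle richer version schemes and distro backports):

```python
# Hedged sketch of acting on a scanner's "fix status": if an advisory says
# the vulnerability is fixed in some release, compare that against the
# installed version to decide whether an upgrade clears the alert.
def parse_version(v):
    # Assumes plain dotted integers, e.g. "1.4.2"; real version schemes
    # (epochs, pre-releases, distro suffixes) need a proper parser.
    return tuple(int(x) for x in v.split("."))

def needs_upgrade(installed, fixed_in):
    # True when the installed package is older than the first fixed release.
    return parse_version(installed) < parse_version(fixed_in)

print(needs_upgrade("1.4.2", "1.4.9"))  # True: upgrading resolves the alert
print(needs_upgrade("1.5.0", "1.4.9"))  # False: already patched
```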

Daily Tech Digest - September 12, 2024

Navigating the digital economy: Innovation, risk, and opportunity

As we move towards the era of Industry 5.0, the digital economy needs to adopt a Human-Centred Design (HCD) approach in which the technology layers revolve around humans at the core. By 2030, Organoid Intelligence (OI) is envisaged to rule the digital economy space, with potential across multiple disciplines and super-intelligent capabilities. This capability could democratize digital economy services across sectors in a seamless manner. Such rapid technology adoption exposes the system to cyber risks, which calls for advanced future security solutions such as quantum security embedded with digital currencies such as the e-Rupee and cryptocurrencies. The 'e-rupee', a virtual equivalent of cash stored in a digital wallet, offers anonymity in payments. ... Indian banks are already piloting blockchain for issuing Letters of Credit, and integrating UPI with blockchain could combine the strengths of both systems, ensuring greater security, ease of use, and instant transactions. Such cyber security threats also create an opportunity for Bitcoin and other cryptocurrencies to expand from their current offering into sectors such as gaming.


From DevOps to Platform Engineering: Powering Business Success

Platform engineering provides a solution with the tools and frameworks needed to scale software delivery processes, ensuring that organizations can handle increasing workloads without sacrificing quality or speed. It also leads to improved consistency and reliability. By standardizing workflows and automating processes, platform engineering reduces the variability and risk associated with manual interventions. This leads to more consistent and reliable deployments, enhancing the overall stability of applications in production. Further productivity comes from the efficiency it offers developers themselves. Developers are most productive when they can focus on writing code and solving business problems. Platform engineering removes the friction associated with provisioning resources, managing environments, and handling operational tasks, allowing developers to concentrate on what they do best. It also provides the infrastructure and tools needed to experiment, iterate, and deploy new features rapidly, enabling organizations to stay ahead of the curve.


Scaling Databases To Meet Enterprise GenAI Demands

A hybrid approach combines vertical and horizontal scalability, providing flexibility and maximizing resource utilization. Organizations can begin with vertical scaling to enhance the performance of individual nodes and then transition to horizontal scaling as data volumes and processing demands increase. This strategy allows businesses to leverage their existing infrastructure while preparing for future growth — for example, initially upgrading servers to improve performance and then distributing the database across multiple nodes as the application scales. ... Data partitioning and sharding involve dividing large datasets into smaller, more manageable pieces distributed across multiple servers. This approach is particularly beneficial for vector databases, where partitioning data improves query performance and reduces the load on individual nodes. Sharding allows a vector database to handle large-scale data more efficiently by distributing the data across different nodes based on a predefined shard key. This ensures that each node only processes a subset of the data, optimizing performance and scalability.
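Hash-based routing on a predefined shard key can be sketched in a few lines; the node names and hash choice here are illustrative assumptions, not any particular database's scheme:

```python
import hashlib

# Minimal sketch of shard routing: a predefined shard key (here a document
# id) is hashed to pick the node that stores and serves that record, so
# each node only processes a subset of the data. Node names are made up.
NODES = ["node-0", "node-1", "node-2", "node-3"]

def shard_for(key: str) -> str:
    # A stable hash keeps routing deterministic across lookups; adding or
    # removing nodes would remap keys (consistent hashing avoids that).
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Every lookup for the same key lands on the same node.
print(shard_for("doc-42") == shard_for("doc-42"))  # True
```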


Safeguarding Expanding Networks: The Role of NDR in Cybersecurity

NDR plays a crucial role in risk management by continuously monitoring the network for any unusual activities or anomalies. This real-time detection allows security teams to catch potential breaches early, often before they can cause serious damage. By tracking lateral movements within the network, NDR helps to contain threats, preventing them from spreading. Plus, it offers deep insights into how an attack occurred, making it easier to respond effectively and reduce the impact. ... When it comes to NDR, key stakeholders who benefit from its implementation include Security Operations Centre (SOC) teams, IT security leaders, and executives responsible for risk management. SOC teams gain comprehensive visibility into network traffic, which reduces false positives and allows them to focus on real threats, ultimately lowering stress and improving their efficiency. IT security leaders benefit from a more robust defence mechanism that ensures complete network coverage, especially in hybrid environments where both managed and unmanaged devices need protection.


Application detection and response is the gap-bridging technology we need

In the shared-responsibility model, not only is there the underlying cloud service provider (CSP) to consider, but there are external SaaS integrations and internal development and platform teams, as well as autonomous teams across the organization often leading to opaque systems with a lack of clarity around where responsibilities begin and end. On top of that, there are considerations around third-party dependencies, components, and vulnerabilities to address. Taking that further, the modern distributed nature of systems creates more opportunities for exploitation and abuse. One example is modern authentication and identity providers, each of which is a potential attack vector over which you have limited visibility due to not owning the underlying infrastructure and logging. Finally, there’s the reality that we’re dealing with an ever-increasing velocity of change. As the industry continues further adoption of DevOps and automation, software delivery cycles continue to accelerate. That trend is only likely to increase with the use of genAI-driven copilots. 


Data Is King. It Is Also Often Unlicensed or Faulty

A report published in the Nature Machine Intelligence journal presents a large-scale audit of dataset licensing and attribution in AI, analyzing over 1,800 datasets used in training AI models on platforms such as Hugging Face. The study revealed widespread miscategorization, with over 70% of datasets omitting licensing information and over 50% containing errors. In 66% of the cases, the licensing category was more permissive than intended by the authors. The report cautions against a "crisis in misattribution and informed use of popular datasets" that is driving recent AI breakthroughs but also raising serious legal risks. "Data that includes private information should be used with care because it is possible that this information will be reproduced in a model output," said Robert Mahari, co-author of the report and JD-PhD at MIT and Harvard Law School. In the vast ocean of data, licensing defines the legal boundaries of how data can be used. ... "The rise in restrictive data licensing has already caused legal battles and will continue to plague AI development with uncertainty," said Shayne Longpre, co-author of the report and research Ph.D. candidate at MIT. 


AI interest is driving mainframe modernization projects

AI and generative AI promise to transform the mainframe environment by delivering insights into complex unstructured data, augmenting human action with advances in speed, efficiency and error reduction, while helping to understand and modernize existing applications. Generative AI also has the potential to illuminate the inner workings of monolithic applications, Kyndryl stated. “Enterprises clearly see the potential with 86% of respondents confirmed they are deploying, or planning to deploy, generative AI tools and applications in their mainframe environments, while 71% say that they are already implementing generative AI-driven insights as part of their mainframe modernization strategy,” Kyndryl stated. ... While AI will likely shape the future for mainframes, a familiar subject remains a key driver for mainframe investments: security. “Given the ongoing threat from cyberattacks, increasing regulatory pressures, and an uptick in exposure to IT risk, security remains a key focus for respondents this year with almost half (49%) of the survey respondents cited security as the number one driver of their mainframe modernization investments in the year ahead,” Kyndryl stated.


How AI Is Propelling Data Visualization Techniques

AI has improved data processing and cleaning. AI identifies missing data and inconsistencies, which means we end up with more reliable datasets for effective visualization. Personalization is yet another benefit AI has brought. AI-powered tools can tailor visualizations based on set goals, context, and preferences. For example, a user can provide their business requirements, and AI will provide a customized chart and information layout based on these requirements. This saves time and can also be helpful when creativity isn’t flowing as well as we’d like. ... It’s useful for geographic data visualization in particular. While traditional maps provide a top-down perspective, AR mapping systems use existing mapping technologies, such as GPS, satellite images, and 3D models, and combine them with real-time data. For example, Google’s Lens in Maps feature uses AI and AR to help users navigate their surroundings by lifting their phones and getting instant feedback about the nearest points of interest. Business users will appreciate how AI automates insights with natural language generation (NLG).
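The cleaning step described above boils down to flagging missing and inconsistent records before charting. A toy sketch with assumed field names and rules:

```python
# Illustrative sketch of pre-visualization data auditing: flag missing
# values and rule violations so charts are built from reliable rows.
# Field names and validity rules are assumptions for the example.
rows = [
    {"region": "EMEA", "revenue": 120.0},
    {"region": "APAC", "revenue": None},    # missing value
    {"region": "AMER", "revenue": -35.0},   # inconsistent: negative revenue
]

def audit(rows):
    issues = []
    for i, row in enumerate(rows):
        if row["revenue"] is None:
            issues.append((i, "missing revenue"))
        elif row["revenue"] < 0:
            issues.append((i, "negative revenue"))
    return issues

print(audit(rows))  # [(1, 'missing revenue'), (2, 'negative revenue')]
```

AI-assisted tooling automates exactly this kind of check at scale, and can additionally suggest imputations or corrections rather than just flagging rows.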


Framing The Role Of The Board Around Cybersecurity Is No Longer About Risk

Having set an unequivocal level of accountability with one executive for cybersecurity, the Board may want to revisit the history of the firm with regards to cyber protection, to ensure that mistakes are not repeated, that funding is sufficient and overall, that the right timeframes are set and respected, in particular over the mid to long-term horizon if large scale transformative efforts are required around cybersecurity. We start to see a list of topics emerging, broadly matching my earlier pieces, around the “key questions the Board should ask”, but more than ever, executive accountability is key in the face of current threats to start building up a meaningful and powerful top-down dialogue around cybersecurity. Readers may notice that I have not used the word “risk” even once in this article. Ultimately, risk is about things that may or may not happen: In the face of the “when-not-if” paradigm around cyber threats – and increasingly other threats as well – it is essential for the Board to frame and own business protection as a topic rooted in the reality of the world we live in, not some hypothetical matter which could be somehow mitigated, transferred or accepted.


Embracing First-Party Data in a Cookie-Alternative World

Unfortunately, the transition away from third-party cookies presents significant challenges that extend beyond shifting customer interactions. Many businesses are particularly concerned about the implications for data security and privacy. When looking into alternative data sources, businesses may inadvertently expose themselves to increased security risks. The shift to first-party data collection methods requires careful evaluation and implementation of advanced security measures to protect against data breaches and fraud. It is also crucial to ensure the transition is secure and compliant with evolving data privacy regulations. To ensure the data is secure, businesses should go beyond standard encryption practices and adopt advanced security measures such as tokenization for sensitive data fields, which minimizes the risk of exposing real data in the event of a breach. Additionally, regular security audits are crucial. Organizations should leverage automated tools for continuous security monitoring and compliance checks that can provide real-time alerts on suspicious activities, helping to preempt potential security incidents. 
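Tokenization as described can be sketched minimally: swap the sensitive value for a random token and keep the mapping in a separate, protected store. This in-memory version is illustrative only; a real deployment would back the vault with hardened, access-controlled storage:

```python
import secrets

# Hedged sketch of tokenization: replace a sensitive field with a random
# token and keep the real value only in a protected vault mapping, so a
# breached dataset exposes opaque tokens rather than real data.
class TokenVault:
    def __init__(self):
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # no relation to the value
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only callers with vault access can recover the original.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
print(token.startswith("tok_"))   # the stored field reveals nothing
print(vault.detokenize(token))    # original recoverable only via the vault
```

Unlike encryption, the token carries no mathematical relationship to the original value, which is why tokenized fields minimize exposure even if the dataset itself is stolen.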



Quote for the day:

“It's not about who wants you. It's about who values and respects you.” -- Unknown

Daily Tech Digest - June 26, 2024

How Developers Can Head Off Open Source Licensing Problems

There are proactive steps developers can take as well. For instance, developers can opt for code that isn’t controlled by a single vendor. “The other side, beyond the licensing, is to look and to understand who’s behind the license, the governance, policy,” he said. Another option to provide some cushion of protection is to use a vendor that specializes in distributing a particular open source solution. A distro vendor can provide indemnification against exposure, he said. They also provide other benefits, such as support and certification to run on specific hardware set-ups. Developers can also look for open source solutions that are under a foundation, rather than a single company, he suggested, although he cautioned that even that isn’t a failsafe measure. “Even foundations are not bulletproof,” he said. “Foundations provide some oversight, some governance and some other means to reduce the risk. But if ultimately, down the path, it ends up again being backed up by a single vendor, then it’s an issue even under a foundation.”


Line of Thought: A Primer on State-Sponsored Cyberattacks

A cyberattack may be an attractive avenue for a state actor and/or its affiliates since it may give them the ability to disrupt an adversary while maintaining plausible deniability. It may also reduce the risk of a retaliatory military strike by the victim. That’s because actually determining who was behind a cyberattack is notoriously difficult: attacks can be shrouded behind impersonated computers or hijacked devices, and it may take months before actually discovering that an attack has occurred. Some APTs leverage an approach called “living off the land”, which enables them to disguise an attack as ordinary network or system activities. Living off the land enabled one APT actor to reportedly enter network systems in America’s critical infrastructure and conduct espionage—reportedly with an eye toward developing capabilities to disrupt communications in the event of a crisis. The attack occurred sometime in 2021 but, due to the stealthy nature of living off the land techniques, wasn’t identified until 2023.


Taking a closer look at AI’s supposed energy apocalypse

Determining precisely how much of that data center energy use is taken up specifically by generative AI is a difficult task, but Dutch researcher Alex de Vries found a clever way to get an estimate. In his study "The growing energy footprint of artificial intelligence," de Vries starts with estimates that Nvidia's specialized chips are responsible for about 95 percent of the market for generative AI calculations. He then uses Nvidia's projected production of 1.5 million AI servers in 2027—and the projected power usage for those servers—to estimate that the AI sector as a whole could use up anywhere from 85 to 134 TWh of power in just a few years. To be sure, that is an immense amount of power, representing about 0.5 percent of projected electricity demand for the entire world (and an even greater ratio in the local energy mix for some common data center locations). But measured against other common worldwide uses of electricity, it's not representative of a mind-boggling energy hog. A 2018 study estimated that PC gaming as a whole accounted for 75 TWh of electricity use per year, to pick just one common human activity that's on the same general energy scale.
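De Vries' range can be roughly reproduced with back-of-the-envelope arithmetic; the per-server wattages below are assumptions chosen to bracket the published 85-134 TWh figures, not numbers taken from the study:

```python
# Rough reproduction of the scale of de Vries' estimate. The wattages are
# assumed values that bracket the published range; the study's own figures
# depend on server model mix and utilization.
servers = 1_500_000                      # projected Nvidia AI servers, 2027
hours_per_year = 24 * 365                # 8,760 h, assuming continuous load
watts_low, watts_high = 6_500, 10_200    # assumed draw per server, in W

def annual_twh(watts):
    # W x h -> Wh, then divide by 1e12 to convert to TWh.
    return servers * watts * hours_per_year / 1e12

print(round(annual_twh(watts_low)))   # 85 TWh
print(round(annual_twh(watts_high)))  # 134 TWh
```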


Stepping Into the Attacker’s Shoes: The Strategic Power of Red Teaming

Red Teaming service providers spend years preparing their infrastructure to conduct Red Teaming exercises. It is not feasible to quickly build a customized infrastructure for a specific customer; this requires prior development. Tailoring the service to a particular client can take anywhere from one to four months. During this period, preliminary exploration takes place. Red Teams use this time to identify and construct a combination of infrastructure elements that will not raise alarms among SOC defenders. ... The focus has shifted towards building a more layered defense, driven by Covid restrictions, remote work and the transition to the cloud. As companies enhance their defensive measures, there is a growing need to conduct Red Teaming projects to evaluate the effectiveness of these new systems and solutions. The risk of increased malicious insider activity has made the hybrid model increasingly relevant for many Red Teaming providers. This approach is neither a complete White Box, where detailed infrastructure information is provided upfront, nor traditional Red Teaming.


Six NFR strategies to improve software performance and security

Based on their analysis and discussions with developers, the researchers identified six key points: Prioritization and planning: NFRs should be treated with as much priority as other requirements. They should be planned in advance and reviewed throughout a development project. Identification and discussion: NFRs should be identified and discussed early in the development process, ideally in the design phase. During the evolution of the software, these NFRs should be revisited if necessary. Use of technologies allied with testing: The adequacy of the NFR can be verified through technologies already approved by the market, where the NFRs associated with those projects satisfy the project's complexity. Benchmarks: Using benchmarks to simulate the behavior of a piece of code or algorithm under different conditions is recommended, since it allows developers to review and refactor code when it is not meeting the project-specified NFRs. Documentation of best practices: By keeping the NFRs well-documented, developers will have a starting point to address any NFR problem when they appear.
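The benchmarking recommendation above can be illustrated with the standard library's timeit, comparing two implementations of the same task so code can be reviewed and refactored when it misses a performance NFR:

```python
import timeit

# Hedged sketch of the "benchmarks" point: time two implementations of
# the same task under identical conditions. The task and thresholds are
# illustrative; a real NFR would specify a concrete latency budget.
def concat_loop(n):
    s = ""
    for i in range(n):
        s += str(i)
    return s

def concat_join(n):
    return "".join(str(i) for i in range(n))

loop_t = timeit.timeit(lambda: concat_loop(1000), number=200)
join_t = timeit.timeit(lambda: concat_join(1000), number=200)
print(f"loop: {loop_t:.4f}s  join: {join_t:.4f}s")
```

Running both variants under the same harness makes the comparison fair; if the measured time exceeds the project-specified NFR, that is the signal to refactor and re-measure.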


Exploring the IT Architecture Profession

In IT architecture, it takes many years to gain the knowledge and skills required to be a professional architect. In my opinion, at the core of our profession are our knowledge and skills in technology. This is what we bring to the table; it is our knowledge and expertise in both business and technology that make the IT architecture profession unique. In addition to business and technology skills, it is essential that the architect possesses soft skills such as leadership, politics, and people management. These are often undervalued. When communicating IT architecture and what an IT architect does, I notice that there are a number of recurring aspects: scope, discipline, domain, and role. ... Perhaps the direction for the profession is to focus on gaining consensus around how we describe scope, domain and discipline rather than worrying too much about titles. An organisation should be able to describe a role from these aspects and describe the required seniority. At the end of the day, this was a thought-provoking exercise and with regards to my original problem, the categorisation of architecture views, I found that scope was perhaps the simplest way to organise the book.


Why collaboration is vital for achieving ‘automation nirvana’

Beeson says that one of the main challenges of implementing automation is getting different teams to collaborate on creating automation content. He explains that engineers and developers often have their own preferred programming language or tools and can be reluctant to share content or learn something new. “A lack of collaboration prevents the ‘automation nirvana’ of removing humans from complex processes, dramatically reducing automation benefits,” he says. “Individuals tend to be reluctant to contribute if they don’t have confidence in the automation tool or platform. “Automation content developers want the automation language to be easy to learn, compatible with their technology choices and provide control to ensure the content they contribute is not misused or modified.” ... When it comes to the future of automation, Beeson has no shortage of thoughts and predictions for the sector, especially relating to the role of automation in defence. “Defence is not immune from the ‘move to cloud’ trend, so hybrid cloud automation is becoming ever more prevalent in the sector,” he says


Securing the digital frontier: Crucial role of cybersecurity in digital transformation advisory

Advisory services have the expertise to perform in-depth technical security assessments to identify and help prioritize vulnerabilities in an organization’s infrastructure. These assessments combine specialised tools with manual testing to produce a comprehensive evaluation. Systems are examined to validate that they follow security best practices and prescribed industry standards. ... Advisors help organisations develop threat models to identify potential attack vectors and assess associated risks. Methodologies such as STRIDE, Kill Chain and PASTA are used to systematically analyse threats and risks. ... An organisation’s security is only as good as its weakest link, and the weakest link is generally an individual within the organisation. Advisory services conduct regular training to educate and inform employees on security best practices. They can also support simulation exercises such as phishing campaigns and develop comprehensive security awareness programs that cover topics like secure password practices, data handling, data privacy, and incident reporting.
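The "secure password practices" such programs cover have a concrete server-side counterpart: never store passwords in plaintext, but derive a slow, salted hash instead. A minimal sketch using only Python's standard library follows; the function names and the iteration count are assumptions (the count should follow current published guidance):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash suitable for storage (never store plaintext)."""
    salt = salt or os.urandom(16)
    # 600,000 iterations is an assumed figure in line with common guidance
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, stored):
    _, candidate = hash_password(password, salt)
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The per-user random salt defeats precomputed rainbow tables, and the deliberately slow key-derivation function makes brute-forcing a stolen database expensive.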


Delving Into the Risks and Rewards of the Open-Source Ecosystem

While some risk is inevitable, enterprise teams need to have an understanding of that risk and use open-source software accordingly. “As a CISO, the biggest risk I see is for organizations not to be intentional about how they use open-source software,” says Hawkins. “It's extremely valuable to build on top of these great projects, but when we do, we need to make sure we understand our dependencies. Including the evaluation of the open-source components as well as the internally developed components is key to being able to accurately [understand] our security posture.” ... So, it isn’t feasible to ditch open-source software, and risk is part of the deal. For enterprises, that reality necessitates risk management. And that need only increases as does reliance on open-source software. “As we move towards cloud and these kind of highly dynamic environments, our dependency on open-source is going up even higher than it ever did in the past,” says Douglas. If enterprise leaders shift how they view open-source software, they may be able to better reap its rewards while mitigating its risks.


Rethinking physical security creates new partner opportunities

Research conducted by Genetec has shown a 275% increase in the number of end users wanting to move more physical security workloads to the cloud. Research also indicates that many organisations aren’t treating SaaS and cloud as an ‘all or nothing’ proposition. However, while a hybrid-cloud infrastructure provides flexibility, it also serves as the gateway to the physical security cloud journey. Organisations need to ensure that there are tools in place that can protect data regardless of its location. ... Organisations that aren’t able to keep up with the upgrade cycle often become subject to the consumption gap. This is where the end user can see the platform evolving with new features and functionality but is unable to take advantage of all of it. The bigger the consumption gap, the more likely it is to hold the organisation back from physical security best practices. SaaS promises to close that gap because it keeps organisations on the latest software version. Importantly, the solution is updated in a way that is pre-approved by the organisation and on a timeframe of its choosing.



Quote for the day:

"Without growth, organizations struggle to add talented people. Without talented people, organizations struggle to grow." -- Ray Attiyah

Daily Tech Digest - April 14, 2024

Why small language models are the next big thing in AI

The complexity of tools and techniques required to work with LLMs also presents a steep learning curve for developers, further limiting accessibility. There is a long cycle time for developers, from training to building and deploying models, which slows down development and experimentation. ... Enter small language models. SLMs are more streamlined versions of LLMs, with fewer parameters and simpler designs. They require less data and training time—think minutes or a few hours, as opposed to days for LLMs. This makes SLMs more efficient and straightforward to implement on-site or on smaller devices. One of the key advantages of SLMs is their suitability for specific applications. Because they have a more focused scope and require less data, they can be fine-tuned for particular domains or tasks more easily than large, general-purpose models. This customization enables companies to create SLMs that are highly effective for their specific needs, such as sentiment analysis, named entity recognition, or domain-specific question answering. 


Navigating the AI revolution

Regulatory bodies like the European Union (EU) closely monitor data center energy usage through the Energy Efficiency Directive. This directive mandates that data center operators with a total rated power of 500 kilowatts or above are required to publicly report their energy performance data annually. An integral aspect of sustainability involves addressing 'Scope 3 emissions' under the Greenhouse Gas Protocol. While there’s a significant focus on Scope 1 and 2 responsibilities, which measure emissions from a data center’s own operations and electricity usage, Scope 3 emissions encompass the broader environmental impact of indirect emissions generated by data center operations outside its premises. ... The rise of AI technologies presents challenges and opportunities for the data center industry. By incorporating track busway solutions into data center infrastructure, operators can address the challenges posed by escalating power densities while contributing to significantly reducing Scope 3 emissions. This will position data centers to meet the demands of AI-driven workloads while ensuring their facilities’ continued reliability and energy efficiency.


Large language models generate biased content, warn researchers

Dr. Maria Perez Ortiz, an author of the report from UCL Computer Science and a member of the UNESCO Chair in AI at UCL team, said, "Our research exposes the deeply ingrained gender biases within large language models and calls for an ethical overhaul in AI development. As a woman in tech, I advocate for AI systems that reflect the rich tapestry of human diversity, ensuring they uplift rather than undermine gender equality." The UNESCO Chair in AI at UCL team will be working with UNESCO to help raise awareness of this problem and contribute to solution developments by running joint workshops and events involving relevant stakeholders: AI scientists and developers, tech organizations, and policymakers. Professor John Shawe-Taylor, lead author of the report from UCL Computer Science and UNESCO Chair in AI at UCL, said, "Overseeing this research as the UNESCO Chair in AI, it's clear that addressing AI-induced gender biases requires a concerted, global effort. This study not only sheds light on existing inequalities but also paves the way for international collaboration in creating AI technologies that honor human rights and gender equity. ..."


Enterprise of the Future: Disruptive Technology = Infinite Possibilities

In the current state, employees are required to navigate multiple enterprise application interfaces and browse many systems to get information for their day-to-day activities. It could be a simple activity such as fetching clarification on leave policies, checking a leave balance or reporting an incident to have a laptop or printer issue fixed. To accomplish their daily responsibilities, employees often end up searching across these applications for standard operating procedures and other relevant data and knowledge. This often results in a steep learning curve, requiring employees to remember where specific data and information reside, to understand the functionality of multiple enterprise applications and to master the way they operate. This fragmentation of information across different data sources, and the labyrinth of applications that must be navigated to perform daily activities, leads to inefficiencies and confusion that ultimately impact productivity. Moreover, changes within an organisation necessitate training on new systems and new ways of working, requiring employees to learn and adapt continuously.


The state of open source in Europe

Europe is renowned for regulations, and the past year has resulted in several large policy frameworks that influence tech — the Cyber Resilience Act (CRA), the Product Liability Directive, and the EU AI Act. With lots of information to digest and react to, both FOSDEM and SOOCON held deep-dive sessions. Over the past year, the CRA has been of most concern to open-source communities, as it puts responsibility for harm caused by software into the hands of creators. For open-source software, this is complicated: who is really responsible, the creator of the open-source software or its implementor? Many open-source projects have no legal entity that anyone can hold “responsible” for problems or harm. ... “‘Open-source software and hardware’ used to be enough to encompass the community and its aims and concerns. Now it’s ‘Open-source software, hardware, and data’.” An open data movement, which aims to keep public data as freely accessible as possible, has existed for some time. The EU alone has nearly 2 million data sets. However, this past year saw the open-source community have to care about the openness of data in completely new ways.


Elemental Surprise: Physicists Discover a New Quantum State

“The search and discovery of novel topological properties of matter have emerged as one of the most sought-after treasures in modern physics, both from a fundamental physics point of view and for finding potential applications in next-generation quantum science and engineering,” said Hasan. “The discovery of this new topological state made in an elemental solid was enabled by multiple innovative experimental advances and instrumentations in our lab at Princeton.” An elemental solid serves as an invaluable experimental platform for testing various concepts of topology. Up until now, bismuth has been the only element that hosts a rich tapestry of topology, leading to two decades of intensive research activities. This is partly attributed to the material’s cleanliness and the ease of synthesis. However, the current discovery of even richer topological phenomena in arsenic will potentially pave the way for new and sustained research directions. “For the first time, we demonstrate that, akin to different correlated phenomena, distinct topological orders can also interact and give rise to new and intriguing quantum phenomena,” Hasan said.


The cloud is benefiting IT, but not business

According to a recent McKinsey survey that engaged about 50 European cloud leaders, the benefits of cloud migration have yet to be realized. In other words, cloud migrations are not as universally beneficial as we’ve been led to believe. I’m not sure why this is news to anyone. The central promise of cloud computing was to usher in a new era of agility, cost savings, and innovation for businesses. However, according to the McKinsey survey, only one-third of European companies actively monitor non-IT outcomes after migrating to the cloud, which suggests a less optimistic picture. Moreover, 71% of companies measured the impact of cloud adoption solely through the prism of IT operational improvements rather than core business benefits. This imbalance raises a critical question: Are the primary beneficiaries of cloud migration just the tech departments rather than the broader business entities they’re supposed to empower? Cloud computing is often associated with business agility and new revenue generation, but just 37% report cost savings outside of IT. Only 32% report new revenue generation despite having invested hundreds of millions of dollars in cloud computing.


Securing Mobile Apps: Development Strategies and Testing Methods

Ensuring secure data storage is crucial in today's technology landscape, especially for apps. It's vital to protect sensitive information and financial records to prevent unauthorized access and data breaches. Secure data storage includes encrypting information both at rest and in transit using encryption methods and secure storage techniques. Moreover, setting up access controls, authentication procedures, and conducting regular security checks are essential to uphold the confidentiality and integrity of stored data. By prioritizing these data storage practices and security protocols, developers can ensure that user information remains shielded from risks and vulnerabilities. Faulty encryption and flawed security measures can lead to vulnerabilities within apps, putting sensitive data at risk of unauthorized access and misuse. If encryption algorithms are weak or not implemented correctly, encrypted data could be easily decoded by malicious actors. Poor key management, like storing encryption keys insecurely, worsens these threats. Additionally, security protocols lacking proper authentication or authorization controls create opportunities for attackers to bypass security measures.


How Do Open Source Licenses Work? The Ultimate Guide

The type of license a project creator chooses should be determined with qualified legal counsel, as entering the realm of copyright law requires that the license align with what the project creators want to achieve. Meanwhile, among popular licenses, the MIT License is comparatively permissive. It allows users to fork or copy the code freely, offering flexibility in how they utilize it. This stands in contrast to so-called “copyleft” licenses, like the GNU General Public License, which impose more stipulations. ... The process of changing a license can be complicated and challenging, emphasizing the importance of selecting the right license initially. When altering an open source project’s license, the new license must often comply with or be compatible with the original license, depending on its terms. Ensuring that the changes align with the copyright holders’ stakes in the project is crucial. This intricate process requires the guidance of competent legal counsel. It’s advisable to establish proper licensing from the project’s outset to avoid complexities later on.


6 Things That Will Tell You If You Are Destined for Leadership

Self-awareness about personality traits naturally leads to increased understanding and empathy when working with people who possess different traits than us. Instead of being unable to make sense of others' actions, we can analyze them and relate them to their inherent personality preference. This ability can prevent natural triggers towards the unknown from judging, blaming or taking things personally. ... Knowing about personality preferences is essential, but it's also crucial to be aware of how people prefer to handle change. Change is an event, but how change affects people is based on their personality traits. Some people embrace change so much they want it to be fast and vast. However, they tend to have difficulty staying focused and completing tasks. Others have an adverse reaction to change. They honor tradition and enjoy predictability.  ... Everyone has experienced negative situations, such as making the wrong choices, reacting to someone's words or behaviors with negative emotions or getting sucked into self-sabotaging thoughts. When you can bounce back from these situations with a positive mindset and be productive, you are exercising resilience.



Quote for the day:

"With desperation comes frustration. With determination comes purpose achievement, and peace." -- James A. Murphy

Daily Tech Digest - March 10, 2024

What’s the privacy tax on innovation?

A few decades ago, California had one of the strongest definitions for certifying Organic foods in the US. Eventually, the US government stepped in with a watered-down definition. Despite the pain of new privacy controls, the US data broker industry will lobby for a similar approach to at least harmonize privacy regulations at the Federal level that limit the impact on their business models when operating across state lines. For businesses and consumers, a more equitable approach would be to add a few more teeth to the cost of data misuse arising from legal sales, employee theft, or breaches. A few high-profile payouts arising from theft or when this data is used as part of multi-million dollar ransomware attacks on critical business systems would have a focusing effect on better privacy management practices. Another option is to turn to banks as holders of trust. Banks may be a good first point for managing the financial data we directly share with them. But what about all the data that others gather that may not be tied to traditional identifiers like social security numbers (SSN) used to unify data, such as IP addresses, phone numbers, Wi-Fi hubs, or the trail of GPS dots that gravitate to your home or office?


Living with the ghost of a smart home’s past

There were the window shades that always opened at 8AM and always closed at sundown. My brother disconnected everything that looked like a hub, and still, operating on some inaccessible internal clock, the shades carried on as they were once programmed to do. ... This is the state of home ownership in 2024! People have been making their homes smart with off-the-shelf parts for well over a decade now. Sometimes they sell those homes, and the new homeowners find themselves mired in troubleshooting when they should be trying to pick out wall colors. Some former homeowners will provide onboarding to the home’s smart home system, but most do as the guy who used to own my brother’s house did. They walk away and leave it as an adventure for the next person. ... I really hope the new renters of my old Brooklyn walk-up appreciate all the 2014 Philips Hue lights I left installed in the basement. There’s a calculus you make as you’re moving. It’s a hectic time, and there’s a lot to be done. Do you want to spend half the day freeing all those Hue bulbs from their obnoxious and broken recessed light housings, or do you want to leave a potential gift for the next homeowner and get started on nesting in your new place? 


Overcoming the AI Privacy Predicament

According to one study by Brookings, while 57% of consumers felt that AI will have a net negative impact on privacy, 34% were unsure about how AI would affect their privacy. Indeed, AI evokes a mixed set of thoughts and emotions in consumers. For most people, the promise of AI is clear: from increasing efficiency, to automating mundane tasks and freeing up more time for creative work, to improving outcomes in areas such as healthcare and education. ... In the realm of AI, the lack of trust is significant. Indeed, 81% of consumers think the information collected by AI companies will be used in ways people are uncomfortable with, as well as in ways that were not originally intended. That consumers are put in a seemingly impossible predicament regarding their privacy leaves them little choice but to a.) consent, or b.) forgo use of the product or service. Both choices leave consumers wanting more from the digital economy. When a new technology has negative implications for privacy, consumers have shown they are willing to engage in privacy-protective behaviors, such as deleting an app, withholding personal information, or abandoning an online purchase altogether.


How Static Analysis Can Save Your Software

While static analysis is a means of pattern detection, fixing an actual bug (for example, dereferencing a null pointer) is much harder, albeit possible. It becomes mathematically difficult to track exponentially increasing possible states. We call this “path explosion.” Say you’re writing code that, given two integers, divides one by the other, and there are various failure modes depending on the integers’ values. But what if the denominator is zero? That results in undefined behavior, and it means you need to look at where those integers came from, their possible values and what branches they took along the way. If you can see that the denominator is checked against zero before the division — and branches away if it is — you should be safe from division-by-zero issues. This theoretical stepping through stages of code is called “symbolic execution.” It’s not too complicated if the checkpoint is fairly close to the division process, but the further away it gets, the more branches you must account for. Crossing the function boundary gets even trickier. But once you have calls from other translation units, the problem becomes intractable in the general case. 
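The guarded-division pattern described above can be illustrated with a deliberately simplified checker built on Python's standard `ast` module. This toy recognizes only the single pattern `if <name> != 0:`; real static analyzers reason path-sensitively across branches and function boundaries, which is exactly where the path explosion the article mentions sets in:

```python
import ast

SOURCE = """
def safe(a, b):
    if b != 0:
        return a / b
    return 0

def risky(a, b):
    return a / b
"""

class DivisionChecker(ast.NodeVisitor):
    """Flag divisions whose denominator is not guarded by an enclosing
    `if <name> != 0:` test (a toy, single-pattern approximation)."""

    def __init__(self):
        self.guarded = set()     # names known non-zero on the current path
        self.findings = []

    def visit_If(self, node):
        test = node.test
        is_guard = (isinstance(test, ast.Compare)
                    and isinstance(test.left, ast.Name)
                    and isinstance(test.ops[0], ast.NotEq)
                    and isinstance(test.comparators[0], ast.Constant)
                    and test.comparators[0].value == 0)
        if is_guard:
            name = test.left.id
            self.guarded.add(name)
            for stmt in node.body:       # branch where the guard holds
                self.visit(stmt)
            self.guarded.discard(name)
            for stmt in node.orelse:     # branch where it does not
                self.visit(stmt)
        else:
            self.generic_visit(node)

    def visit_BinOp(self, node):
        if isinstance(node.op, ast.Div) and isinstance(node.right, ast.Name):
            if node.right.id not in self.guarded:
                self.findings.append(
                    f"possible division by zero at line {node.lineno}")
        self.generic_visit(node)

checker = DivisionChecker()
checker.visit(ast.parse(SOURCE))
print(checker.findings)   # only the unguarded division in `risky` is flagged
```

Moving the guard further from the division, or across a function call, would force the checker to track ever more branches, which is the path-explosion problem in miniature.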


Avoiding Shift Left Exhaustion – Part 1

Shift left requires developers to be involved in testing, quality assurance, and collaboration throughout the development cycle. While this is undoubtedly beneficial for the final product, it can lead to an increased workload for developers who must balance their coding responsibilities with testing and problem-solving tasks. ... Adapting to Shift left practices often requires developers to acquire new skills and stay current with the latest testing methodologies and tools. This continuous learning can be intellectually stimulating and exhausting, especially in an industry that evolves rapidly. Developers must understand new tools, processes, and technologies as more things get moved earlier in the development lifecycle. ... The added pressure of early and continuous testing and the demand for faster development cycles can lead to developer burnout. When developers are overburdened, their creativity and productivity may suffer, ultimately impacting the software quality they produce. ... Shifting testing and quality assurance left in the development process may impose strict time constraints. Developers may feel pressured to meet tight deadlines, which can be stressful and lead to rushed decision-making, potentially compromising the software’s quality.


Ransomware Attacks on Critical Infrastructure Are Surging

Especially under fire are critical services. Healthcare and public health agencies dominated, filing 249 reports to IC3 last year over ransomware attacks, followed by 218 reports from critical manufacturing and 156 from government facilities. Ransomware-wielding attackers are potentially targeting these sectors most because they perceive the victims as having a proclivity to pay, given the risk to life or essential business processes posed by their systems being disrupted. Last year, IC3 received a ransomware report from at least one victim in all of the 16 critical infrastructure sectors - which include financial services, food and agriculture, energy and communications - except for two: dams and nuclear reactors, materials and waste. The ransomware group tied to the largest number of successful attacks against critical infrastructure reported to IC3 last year was LockBit, followed by Alphv/BlackCat, Akira, Royal and Black Basta. Law enforcement recently disrupted Alphv/BlackCat, as well as LockBit, after which each group separately claimed to have rebooted before appearing to go dark. 


What’s the missing piece for mainstream Web3 adoption?

Today’s Web3 lacks a unifying ecosystem, causing the market to fracture into multiple, independently evolving use cases. Crypto enthusiasts have to use various decentralized applications (DApps) and platforms to perform multiple transactions and interact with the different sectors of Web3. However, this isn’t a sustainable growth model for the Web3 industry and is more of a deterrent rather than a benefit when it comes to crypto adoption. ... Recognizing the need for a more integrated approach, some Web3 players are moving beyond the hype. Legion Network is emerging as a notable example among these. As a one-stop shop for Web3, Legion Network addresses the complexity of the industry and reaches new audiences. It brings together essential Web3 use cases, including a proprietary crypto wallet with comprehensive portfolio tracking, DeFi swaps and bridges, engaging play-to-earn/win games, captivating quests with prize rewards, a launchpad for emerging projects and a unique SocialFi experience that fosters community engagement.


What’s Driving Changes in Open Source Licensing?

In response to the challenges posed by cloud computing, some vendor-driven open source projects have changed their licenses or their GTM models. For example, MongoDB, Elastic, Confluent, Redis Labs and HashiCorp have adopted new licenses that restrict the use of their software-as-a-service by third parties or require them to pay fees or share their modifications. These changes are intended to protect the revenue and sustainability of the original vendors and to ensure that they can continue to invest in the open source project. However, these changes have also caused some controversy and backlash from the user community, who may feel that the project is becoming less open and more proprietary or that they are losing some of the benefits and freedoms of open source. However, community-driven open source projects have largely maintained their permissive licenses and their collaborative approach. These projects still benefit from the diversity and scale of their user community, who contribute to the development, maintenance, support and security of the software. These projects also leverage the support of organizations and foundations, such as the Linux Foundation, the Apache Software Foundation and the CNCF, who provide governance, funding and infrastructure. 


Botnets: The uninvited guests that just won’t leave

Reducing response time is vital. The longer the dwell time, the more likely it is that botnets can impact a business, particularly given that botnets can spread across many devices in a short period. How can security teams improve detection processes and shrink the time it takes to respond to malicious activity? Security practitioners should have multiple tools and strategies at their disposal to protect their organization’s networks against botnets. An obvious first step is to block access to all recognized C2 (command-and-control) infrastructure. Next, leverage application control to restrict unauthorized access to your systems. Additionally, use Domain Name System (DNS) filtering to target botnets explicitly, concentrating on each category or website that might expose your system to them. DNS filtering also helps to mitigate the Domain Generation Algorithms that botnets often use. Monitoring data as it enters and leaves devices is vital as well, as you can spot botnets as they attempt to infiltrate your computers or those connected to them. This is what makes security information and event management (SIEM) technology, paired with detection of malicious indicators of compromise, so critical to protecting against bots.
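The DNS-filtering and DGA-mitigation points can be sketched as follows. The blocklist entries, length cutoff, and entropy threshold below are invented placeholders, not real threat-intelligence values; production filters use curated feeds and far more robust classifiers:

```python
import math
from collections import Counter

# Hypothetical blocklist entries standing in for a real C2 threat feed
BLOCKLIST = {"evil-c2.example", "botnet-command.test"}

def shannon_entropy(s):
    """Bits of entropy per character; random-looking strings score higher."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def should_block(domain, entropy_threshold=3.5):
    """Toy DNS filter: block known C2 domains, plus long high-entropy
    labels that resemble Domain Generation Algorithm (DGA) output."""
    if domain in BLOCKLIST:
        return True
    label = domain.split(".")[0]
    return len(label) > 12 and shannon_entropy(label) > entropy_threshold

print(should_block("evil-c2.example"))        # True  (blocklist hit)
print(should_block("xj9qk2vbn3mzp8w.com"))    # True  (DGA-like label)
print(should_block("mail.google.com"))        # False
```

The entropy heuristic captures why DGA domains stand out: algorithmically generated labels tend to look like random character soup, while human-chosen names reuse a small alphabet of common substrings.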


Are You Ready to Protect Your Company From Insider Threats? Probably Not

The real problem is that employees and employers don’t trust each other. This is an enormous risk for employees, as this environment makes it more likely that insider threats, security risks that originate from within the company, will emerge or intensify when tensions are high and motivations, including financial strain, dissatisfaction or desperation, drive individuals to act against their own organization. That’s the bad news. The worst news is that most companies are unprepared to meet the moment. ... Insider threats often betray their motivation. Sometimes, they tell colleagues about their intentions. Other times, their actions speak louder than words, as attempts to work around security protocols, active resentment for coworkers or leadership or general job dissatisfaction can be a red flag that an insider threat is about to act. Explaining the impact of human intelligence, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) writes, “An organization’s own personnel are an invaluable resource to observe behaviors of concern, as are those who are close to an individual, such as family, friends, and coworkers.”



Quote for the day:

"Leaders must be close enough to relate to others, but far enough ahead to motivate them." -- John C. Maxwell

Daily Tech Digest - January 27, 2024

The future of biometrics in a zero trust world

Nearly one in three CEOs and members of senior management have fallen victim to phishing scams, either by clicking on a phishing link or by sending money. C-level executives are the primary targets for biometric and deep fake attacks because they are four times more likely to be victims of phishing than other employees, according to Ivanti’s State of Security Preparedness 2023 Report. Ivanti found that whale phishing is the latest digital epidemic to attack the C-suite of thousands of companies. ... In response to the increasing need for better biometric security globally, Badge Inc. recently announced the availability of its patented authentication technology that renders personal identity information (PII) and biometric credential storage obsolete. Badge also announced an alliance with Okta, the latest in a series of partnerships aimed at strengthening Identity and Access Management (IAM) for their shared enterprise customers. Srivastava explained how her company’s approach to biometrics eliminates the need for passwords, device redirects, and knowledge-based authentication (KBA). Badge supports an enroll once and authenticate on any device workflow that scales across an enterprise’s many threat surfaces and devices.


Understanding CQRS Architecture

CRUD and CQRS are both tactical patterns, concentrating on the implementation specifics at the level of individual services. Therefore, asserting that an organization relies entirely on a CQRS architecture may not be accurate. While certain services may adopt this architecture, it is typical for other services to employ simpler paradigms; the entire organization may not adhere to a unified style for all problems. The CRUD architecture assumes the existence of a single model for both read and update operations. CRUD operations are typically linked with traditional relational database systems, and numerous applications adopt a CRUD-based approach for data management. Conversely, the CQRS architecture assumes the presence of distinct models for queries and commands. While this paradigm is more intricate to implement and introduces certain subtleties, it provides the advantage of enabling stricter enforcement of data validation, implementation of robust security measures, and optimization of performance. These definitions may appear somewhat vague and abstract at the moment, but clarity will emerge as we delve into the details. It's important to note here that CQRS or CRUD should not be regarded as an overarching philosophy to be blindly applied in all circumstances.
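To make the command/query split concrete, here is a minimal, hypothetical Python sketch of the two models. All class and field names are invented for illustration; a real CQRS service would typically dispatch commands and project events through a message bus rather than direct calls.

```python
from dataclasses import dataclass

@dataclass
class CreateOrder:
    """A command: expresses intent to change state."""
    order_id: str
    amount: float

class WriteModel:
    """Handles commands; enforces validation and is the source of truth."""
    def __init__(self):
        self.orders = {}

    def handle(self, cmd: CreateOrder) -> CreateOrder:
        if cmd.amount <= 0:
            raise ValueError("amount must be positive")  # strict validation
        self.orders[cmd.order_id] = cmd.amount
        return cmd  # returned as an "event" for the read side to project

class ReadModel:
    """A separate, denormalized view optimized purely for queries."""
    def __init__(self):
        self.summaries = []

    def project(self, event: CreateOrder) -> None:
        self.summaries.append(f"{event.order_id}: ${event.amount:.2f}")

    def list_orders(self):
        return list(self.summaries)

write_side, read_side = WriteModel(), ReadModel()
read_side.project(write_side.handle(CreateOrder("A-1", 42.0)))
print(read_side.list_orders())  # ['A-1: $42.00']
```

Note that the read side stores a pre-formatted summary rather than the raw row, which is the kind of query-specific optimization the separation makes possible.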


Role of Wazuh in building a robust cybersecurity architecture

Wazuh is a free and open source security solution that offers unified XDR and SIEM protection across several platforms. Wazuh protects workloads across virtualized, on-premises, cloud-based, and containerized environments to provide organizations with an effective approach to cybersecurity. By collecting data from multiple sources and correlating it in real time, it offers a broader view of an organization's security posture. Wazuh plays a significant role in implementing a cybersecurity architecture, providing a platform for security information and event management, active response, compliance monitoring, and more. It provides flexibility and interoperability, enabling organizations to deploy Wazuh agents across diverse operating systems. Wazuh is equipped with a File Integrity Monitoring (FIM) module that helps detect file changes on monitored endpoints. It takes this a step further by combining the FIM module with threat detection rules and threat intelligence sources to detect malicious files, allowing security analysts to stay ahead of the threat curve. Wazuh also provides out-of-the-box support for compliance frameworks like PCI DSS, HIPAA, GDPR, NIST SP 800-53, and TSC.
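The core idea behind an FIM module can be sketched in a few lines of Python. This is a toy illustration of the hash-baseline-and-compare technique, not Wazuh's actual implementation:

```python
import hashlib
import os
import tempfile
from pathlib import Path

def snapshot(paths):
    """Map each file path to the SHA-256 digest of its contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def diff(baseline, current):
    """Return the paths whose hashes changed between two snapshots."""
    return [p for p in baseline if baseline[p] != current.get(p)]

# Demonstration: take a baseline, tamper with the file, re-scan.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"original contents")
tmp.close()

before = snapshot([tmp.name])
Path(tmp.name).write_bytes(b"tampered contents")
after = snapshot([tmp.name])

print(diff(before, after))  # prints a list containing the modified file's path
os.unlink(tmp.name)
```

A real FIM agent would run such scans continuously (or hook filesystem events), persist the baseline securely, and feed changes into detection rules, as the article describes.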


Budget cuts loom for data privacy initiatives

In addition to difficulty understanding the privacy regulatory landscape, organizations also face other data privacy challenges, including budget. 43% of respondents say their privacy budget is underfunded and only 36% say their budget is appropriately funded. Looking at the year ahead, only 24% expect their budget to increase (down 10 points from last year), and only one percent say it will remain the same (down 26 points from last year). 51% expect a decrease in budget, significantly higher than last year, when only 12% expected a decrease. For those seeking resources, technical privacy positions are in highest demand, with 62% of respondents indicating there will be increased demand for technical privacy roles in the next year, compared to 55% for legal/compliance roles. However, respondents indicate there are skills gaps among these privacy professionals; they cite experience with different types of technologies and/or applications (63%) as the biggest one. When looking at common privacy failures, respondents pinpointed the lack of or poor training (49%), not practicing privacy by design (44%) and data breaches (42%) as the main concerns.


How to become a Chief Information Security Officer

In general, the CISO position is well-paid. Due to high demand and a limited talent pool, top-tier CISOs have commanded salaries in excess of $2.3 million. Nonetheless, executive remuneration may vary based on industry, company size and the specifics of a role. The CISO typically manages a team of cybersecurity experts (sometimes multiple teams) and collaborates with high-level business stakeholders to facilitate the strategic development and completion of cybersecurity initiatives. ... While experience in cybersecurity does count for a lot, and while smart and talented people do ascend to the CISO role without extensive formal schooling, it can pay to get the right education. Most enterprises will expect a potential CISO to have a bachelor’s degree in computer science (or a similar discipline). There are exceptions, but an undergraduate degree is often used as a credibility benchmark. ... When it comes to real-world experience, most CISO roles require a minimum of five years in the industry. A potential CISO should maintain broad knowledge of a variety of platforms and solutions, along with a strong understanding of both cybersecurity history and modern-day cybersecurity threats.


I thought software subscriptions were a ripoff until I did the math

Selling perpetual licenses means you get a big surge in revenue with each new release. But then you have to watch that cash pile dwindle as you work on the next version and try to convince your customers to pay for the upgrade. If you want the opportunity to continually improve your software, you need to bring in enough revenue each year to justify the time and resources you spend on the project. That's the difference between a sustainable business and a hobby. It strikes me that the real objection to software as a subscription isn't to the business model, but rather to the price. If you think a fair price for a piece of software is closer to $50 than $500, and you should be able to use it in perpetuity, you're telling the developer that you're willing to pay them no more than a few bucks a month. They're trying to tell you that's not enough to sustain a software business, and maybe you should try a free, open-source option instead. All the developers that are migrating to a cloud-based subscription model are taking a necessary step to help ensure their long-term survival. The challenge for companies playing in this space is to make it crystal clear that their subscriptions offer real value.
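The pricing argument comes down to back-of-the-envelope arithmetic. The figures below (a $50 perceived-fair perpetual price, a four-year useful life, a $10/month subscription) are illustrative assumptions, not numbers from the article:

```python
# What a one-time "fair price" works out to per month of actual use.
years_of_use = 4
perceived_fair_price = 50.0  # buyer's idea of a fair perpetual price

monthly_equivalent = perceived_fair_price / (years_of_use * 12)
print(f"${monthly_equivalent:.2f}/month")  # $1.04/month

# Compare with a modest subscription over the same period.
subscription_per_month = 10.0
total_subscription = subscription_per_month * 12 * years_of_use
print(f"${total_subscription:.0f} over {years_of_use} years")  # $480 over 4 years
```

Roughly a dollar a month is the "few bucks" the author describes, which makes the developer's counterargument easy to see: the subscription generates nearly ten times the revenue per user over the same window.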


Filling the Cybersecurity Talent Gap

Thankfully, there is a talented group in the veteran community ready and willing to meet the challenge. Through their unique skills, discipline, and unmatched experience, veterans are perfectly suited to help address the talent gap and growing cyber threats we face. Not only that, but veterans will find that IT and cybersecurity provide a second career as they transition out of the service. Veterans leave service with a wide range of talents that have many applications outside of the military. This includes both what are often called "soft skills," or those that are beneficial in a number of settings, as well as technical abilities well-suited for cybersecurity and IT. ... As the industry continues to incorporate more secure-by-design principles that guide how we approach security and cyber resiliency, we need a workforce that understands the importance of security and defense. To make this a reality, we need both the government and private companies to step up and create the right pathways for veterans to enter the workforce. This can include expanding the GI Bill to add additional incentives for careers in cybersecurity. Private companies should also offer more hands-on workshops and training that can both provide a way for applicants to learn and help companies fill their open positions.


How Much Architecture Is “Enough?”: Balancing the MVP and MVA Helps You Make Better Decisions

The critical challenge that the MVA must solve is that it must answer the MVP’s current challenges while anticipating but not actually solving future challenges. In other words, the MVA must not require unacceptable levels of rework to actually solve those future problems. Some rework is okay and expected, but the words "complete rewrite" mean that the architecture has failed and all bets on viability are off. As a result of this, the MVA hangs in a dynamic balance between solving future problems that may never exist, and letting technical debt pile up to the point where it leads to, metaphorically, architectural bankruptcy. Being able to balance these two forces is where experience comes in handy. ... The development team creates the initial MVA based on their initial and often incomplete understanding of the problems the MVA needs to solve. They will not usually have much in the way of QARs, perhaps only broad organizational "standards" that are more aspirational than accurate. These initial statements are often so vague as to be unhelpful, e.g. "the system must support very large numbers of concurrent users", "the system must be easy to support and maintain", "the system must be secure against external threats", etc.


Group permission misconfiguration exposes Google Kubernetes Engine clusters

The problem is that in most other systems “authenticated users” are users that the administrators created or defined in the system. This is also the case in privately self-managed Kubernetes clusters or for the most part in clusters set up on other cloud services providers such as Azure or AWS. So, it’s not hard to see how some administrators might conclude that system:authenticated refers to a group of verified users and then decide to use it as an easy method to assign some permissions to all those trusted users. “GKE, in contrast to Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS), exposes a far-reaching threat since it supports both anonymous and full OpenID Connect (OIDC) access,” the Orca researchers said. “Unlike AWS and Azure, GCP’s managed Kubernetes solution considers any validated Google account as an authenticated entity. Hence, system:authenticated in GKE becomes a sensitive asset administrators should not overlook.” The Kubernetes API can integrate with many authentication systems and since access to Google Cloud Platform and all of Google’s services in general is done through Google accounts, it makes sense to also integrate GKE with Google’s IAM and OAuth authentication and authorization system.
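The misconfiguration the researchers describe can be made concrete with a hypothetical RBAC manifest (the binding name is invented for illustration). Because GKE treats any validated Google account as a member of system:authenticated, a binding like this exposes the bound role far beyond the administrator's own users:

```yaml
# HYPOTHETICAL example of the risky pattern described above. On GKE this
# grants the "view" ClusterRole to ANY authenticated Google account, not
# just users the administrator created or intended to trust.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: give-all-users-view   # illustrative name
subjects:
  - kind: Group
    name: system:authenticated   # in GKE: every valid Google account
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```

Administrators can audit their clusters for such bindings, for example with `kubectl get clusterrolebindings -o wide` and searching the output for `system:authenticated` (or `system:anonymous`).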


Will the Rise of Generative AI Increase Technical Debt?

The rise of generative AI-related tools will likely increase technical debt, both due to the rush to hastily adopt new capabilities and the need to mold AI models to suit specific requirements. “New LLMs and generative AI applications will undoubtedly increase technical debt in the future, or at a minimum, greatly increase the need to manage that debt proactively,” said Quillin. “It starts with new requirements to continually manage, maintain, and nurture these models against a broad range of new KPIs spanning bias, concept drift, and shifting business, consumer, and environmental inputs and goals,” he said. Incorporating AI may require a significant upfront commitment, leading to additional technical debt. “It won’t be just a build-and-maintain scenario, but rather, the first of many steps on a long road ahead,” said Prince Kohli, CTO of Automation Anywhere. Product companies with a generative AI focus must invest in creating a data and model strategy, a data architecture to work with AI, controls for the AI and more. “Technology disruptions and pivots such as this always lead to this kind of technical debt that must be continually paid down, but it’s the price of admittance,” he said.



Quote for the day:

"The best preparation for tomorrow is doing your best today." -- H. Jackson Brown, Jr.