
Daily Tech Digest - April 24, 2025


Quote for the day:

“Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability.” -- Patrick Lencioni



Algorithm can make AI responses increasingly reliable with less computational overhead

The algorithm exploits the structure by which language information is organized in the AI's large language model (LLM) to find related information. The models divide the language information in their training data into word parts. The semantic and syntactic relationships between the word parts are then arranged as connecting arrows—known in the field as vectors—in a multidimensional space. The dimensions of this space, which can number in the thousands, arise from the relationship parameters that the LLM independently identifies during training on general data. ... Relational arrows pointing in the same direction in this vector space indicate a strong correlation. The larger the angle between two vectors, the less two units of information relate to one another. The SIFT algorithm developed by ETH researchers now uses the direction of the relationship vector of the input query (prompt) to identify those information relationships that are closely related to the question but at the same time complement each other in terms of content. ... By contrast, the most common method used to date for selecting the information suitable for the answer, known as the nearest neighbor method, tends to accumulate redundant information that is widely available. The difference between the two methods becomes clear when looking at an example of a query prompt that is composed of several pieces of information.
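The contrast between plain nearest-neighbor retrieval and a selection that rewards complementary information can be sketched in a few lines of Python. The greedy redundancy penalty below is a generic diversification heuristic (in the spirit of maximal marginal relevance), not the published SIFT method; all names, vectors, and parameters are illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: small angle between vectors means strong relatedness.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbors(query, docs, k=3):
    # Plain nearest-neighbor retrieval: rank purely by similarity to the query,
    # which tends to return near-duplicates of the same widely available fact.
    scores = [cosine(query, d) for d in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])[:k]

def relevant_but_diverse(query, docs, k=3, redundancy_penalty=0.7):
    # Greedy selection: reward similarity to the query, penalize similarity
    # to documents already chosen, so the picks complement each other.
    chosen = []
    while len(chosen) < k:
        best, best_score = None, -np.inf
        for i, d in enumerate(docs):
            if i in chosen:
                continue
            score = cosine(query, d) - redundancy_penalty * max(
                (cosine(d, docs[j]) for j in chosen), default=0.0)
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
docs = rng.normal(size=(10, 64))   # stand-in for document embeddings
query = rng.normal(size=64)        # stand-in for the prompt embedding
print(nearest_neighbors(query, docs), relevant_but_diverse(query, docs))
```

With real embeddings, the nearest-neighbor picks tend to cluster around one popular fact, while the penalized selection spreads across complementary pieces of information.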


Bring Your Own Malware: ransomware innovates again

The approach taken by DragonForce and Anubis shows that cybercriminals are becoming increasingly sophisticated in the way they market their services to potential affiliates. This marketing approach, in which DragonForce positions itself as a fully-fledged service platform and Anubis offers different revenue models, reflects how ransomware operators behave like “real” companies. Recent research has also shown that some cybercriminals even hire pentesters to test their ransomware for vulnerabilities before deploying it. So it’s not just dark web sites or a division of tasks, but a real ecosystem of clear options for “consumers.” We may also see a modernization of dark web forums, which currently resemble the online platforms of the 2000s. ... Although these developments in the ransomware landscape are worrying, Secureworks researchers also offer practical advice for organizations to protect themselves. Above all, defenders must take “proactive preventive” action. Fortunately and unfortunately, this mainly involves basic measures. Fortunately, because the policies to be implemented are manageable; unfortunately, because there is still a lack of universal awareness of such security practices. In addition, organizations must develop and regularly test an incident response plan to quickly remediate ransomware activities.


Phishing attacks thrive on human behaviour, not lack of skill

Phishing draws heavily from principles of psychology and classic social engineering. Attacks often play on authority bias, prompting individuals to comply with requests from supposed authority figures, such as IT personnel, management, or established brands. Additionally, attackers exploit urgency and scarcity by sending warnings of account suspensions or missed payments, and manipulate familiarity by referencing known organisations or colleagues. Psychologs has explained that many phishing techniques bear resemblance to those used by traditional confidence tricksters. These attacks depend on inducing quick, emotionally-driven decisions that can bypass normal critical thinking defences. The sophistication of phishing is furthered by increasing use of data-driven tactics. As highlighted by TechSplicer, attackers are now gathering publicly available information from sources like LinkedIn and company websites to make their phishing attempts appear more credible and tailored to the recipient. Even experienced professionals often fall for phishing attacks, not due to a lack of intelligence, but because high workload, multitasking, or emotional pressure make it difficult to properly scrutinise every communication. 

What Steve Jobs can teach us about rebranding

Humans like to think of themselves as rational animals, but it comes as no news to marketers that we are motivated to a greater extent by emotions. Logic brings us to conclusions; emotion brings us to action. Whether we are creating a poem or a new brand name, we won’t get very far if we treat the task as an engineering exercise. True, names are formed by putting together parts, just as poems are put together with rhythmic patterns and with rhyming lines, but that totally misses what is essential to a name’s success or a poem’s success. Consider Microsoft and Apple as names. One is far more mechanical, and the other much more effective at creating the beginning of an experience. While both companies are tremendously successful, there is no question that Apple has the stronger, more emotional experience. ... Different stakeholders care about different things. Employees need inspiration; investors need confidence; customers need clarity on what’s in it for them. Break down these audiences and craft tailored messages for each group. Identifying the audience groups can be challenging. While the first layer is obvious—customers, employees, investors, and analysts—all these audiences are easy to find and message. However, what is often overlooked is the individuals in those audiences who can more positively influence the rebrand. It may be a particular journalist, or a few select employees. 


Coaching AI agents: Why your next security hire might be an algorithm

Like any new team member, AI agents need onboarding before operating at maximum efficacy. Without proper onboarding, they risk misclassifying threats, generating excessive false positives, or failing to recognize subtle attack patterns. That’s why more mature agentic AI systems will ask for access to internal documentation, historical incident logs, or chat histories so the system can study them and adapt to the organization. Historical security incidents, environmental details, and incident response playbooks serve as training material, helping it recognize threats within an organization’s unique security landscape. Alternatively, these details can help the agentic system recognize benign activity. For example, once the system knows which VPN services are allowed or which users are authorized to conduct security testing, it will know to mark some alerts related to those services or activities as benign. ... Adapting AI isn’t a one-time event; it’s an ongoing process. Like any team member, agentic AI deployments improve through experience, feedback, and continuous refinement. The first step is maintaining human-in-the-loop oversight. Like any responsible manager, security analysts must regularly review AI-generated reports, verify key findings, and refine conclusions when necessary.
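A minimal sketch of how that organizational context might be applied at triage time, assuming hypothetical allowlists of sanctioned VPN endpoints and authorized testers; the field names and rules are illustrative, not any vendor's implementation.

```python
# Hypothetical organizational context an agentic system might learn during onboarding.
ALLOWED_VPN_SERVICES = {"corp-vpn.example.com", "backup-vpn.example.com"}
AUTHORIZED_PENTESTERS = {"alice@example.com", "bob@example.com"}

def triage(alert: dict) -> str:
    """Mark an alert benign when it matches known-sanctioned activity, else escalate."""
    if alert.get("category") == "vpn_connection" and alert.get("destination") in ALLOWED_VPN_SERVICES:
        return "benign: sanctioned VPN service"
    if alert.get("category") == "security_testing" and alert.get("user") in AUTHORIZED_PENTESTERS:
        return "benign: authorized security testing"
    return "escalate: review by analyst"

print(triage({"category": "vpn_connection", "destination": "corp-vpn.example.com"}))
print(triage({"category": "security_testing", "user": "mallory@example.com"}))
```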


Cyber insurance is no longer optional, it’s a strategic necessity

Once the DPDPA fully comes into effect, it will significantly alter how companies approach data protection. Many enterprises are already making efforts to manage their exposure, but despite their best intentions, they can still fall victim to breaches. We anticipate that the implementation of DPDPA will likely lead to an increase in the uptake of cyber insurance. This is because the Act clearly outlines that companies may face penalties in the event of a data breach originating from their environment. Since cyber insurance policies often include coverage for fines and penalties, this will become an increasingly important risk-transfer tool. ... The critical question has always been: how can we accurately quantify risk exposure? Specifically, if a certain event were to occur, what would be the financial impact? Today, there are advanced tools and probabilistic models available that allow organisations to answer this question with greater precision. Scenario analyses can now be conducted to simulate potential events and estimate the resulting financial impact. This, in turn, helps enterprises determine the appropriate level of insurance coverage, making the process far more data-driven and objective. Post-incident technology also plays a crucial role in forensic analysis. When an incident occurs, the immediate focus is on containment. 
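One common way to make such scenario analysis concrete is a simple Monte Carlo loss model: simulate breach frequency and severity many times, then read off percentile losses to size coverage. The distributions and parameters below are placeholder assumptions for illustration, not actuarial figures.

```python
import numpy as np

rng = np.random.default_rng(42)
n_simulations = 100_000

# Assumed parameters: breach counts follow a Poisson process, severity per
# breach is lognormal (heavy-tailed losses). Both are illustrative only.
annual_breach_rate = 0.8                  # expected breaches per year
severity_mu, severity_sigma = 13.0, 1.2   # lognormal parameters for loss size

annual_losses = np.zeros(n_simulations)
for i in range(n_simulations):
    n_events = rng.poisson(annual_breach_rate)
    if n_events:
        annual_losses[i] = rng.lognormal(severity_mu, severity_sigma, n_events).sum()

# Summary statistics a CISO or insurer might use to set coverage limits.
print(f"Expected annual loss:  {annual_losses.mean():,.0f}")
print(f"95th percentile loss:  {np.percentile(annual_losses, 95):,.0f}")
print(f"99th percentile loss:  {np.percentile(annual_losses, 99):,.0f}")
```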


Adversary-in-the-Middle Attacks Persist – Strategies to Lessen the Impact

One of the most recent examples of an AiTM attack is the attack on Microsoft 365 with the PhaaS toolkit Rockstar 2FA, an updated version of the DadSec/Phoenix kit. In 2024, a Microsoft employee opened an attachment that led them to a phony website, where they were tricked into completing an identity verification session through the link; this effectively authenticated the attacker and granted them entry to the employee’s account. ... As more businesses move online, from banks to critical services, fraudsters are more tempted by new targets. The challenges often depend on location and sector, but one thing is clear: Fraud operates without limitations. In the United States, AiTM fraud is progressively targeting financial services, e-commerce and iGaming. For financial services, this means that cybercriminals are intercepting transactions or altering payment details, causing hefty losses. In e-commerce and marketplaces, attackers are exploiting vulnerabilities to intercept and modify transactions through data manipulation, redirecting payments to their accounts. ... As technology advances and fraud continues to evolve with it, we face the persistent challenge of increased fraudster sophistication, threatening businesses of all sizes.


From legacy to lakehouse: Centralizing insurance data with Delta Lake

Centralizing data and creating a Delta Lakehouse architecture significantly enhances AI model training and performance, yielding more accurate insights and predictive capabilities. The time-travel functionality of the delta format enables AI systems to access historical data versions for training and testing purposes. A critical consideration emerges regarding enterprise AI platform implementation. Modern AI models, particularly large language models, frequently require real-time data processing capabilities. The machine learning models would target and solve for one use case, but Gen AI has the capability to learn and address multiple use cases at scale. In this context, Delta Lake effectively manages these diverse data requirements, providing a unified data platform for enterprise GenAI initiatives. ... This unification of data engineering, data science and business intelligence workflows contrasts sharply with traditional approaches that required cumbersome data movement between disparate systems (e.g., data lake for exploration, data warehouse for BI, separate ML platforms). Lakehouse creates a synergistic ecosystem, dramatically accelerating the path from raw data collection to deployed AI models generating tangible business value, such as reduced fraud losses, faster claims settlements, more accurate pricing and enhanced customer relationships.
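The time-travel capability referred to above can be exercised with a versioned read in Spark. A rough sketch, assuming a Spark session with the open-source delta-spark package available and a placeholder table path:

```python
from pyspark.sql import SparkSession

# Assumes the delta-spark package is on the classpath; the path is a placeholder.
spark = (SparkSession.builder
         .appName("delta-time-travel")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

claims_path = "/mnt/lakehouse/claims_delta"  # placeholder location

# Current state of the table.
current = spark.read.format("delta").load(claims_path)

# Historical states, e.g. to reproduce the exact training set used last quarter.
as_of_version = spark.read.format("delta").option("versionAsOf", 3).load(claims_path)
as_of_time = (spark.read.format("delta")
              .option("timestampAsOf", "2025-01-01 00:00:00")
              .load(claims_path))

print(current.count(), as_of_version.count(), as_of_time.count())
```

Pinning model training and testing to a specific version or timestamp is what makes experiments reproducible against a table that keeps changing underneath.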


How AI and Data-Driven Decision Making Are Reshaping IT Ops

Rather than relying on intuition, IT decision-makers now lean on insights drawn from operational data, customer feedback, infrastructure performance, and market trends. The objective is simple: make informed decisions that align with broader business goals while minimizing risk and maximizing operational efficiency. With the help of analytics platforms and business intelligence tools, these insights are often transformed into interactive dashboards and visual reports, giving IT teams real-time visibility into performance metrics, system anomalies, and predictive outcomes. A key evolution in this approach is the use of predictive intelligence. Traditional project and service management often fall short when it comes to anticipating issues or forecasting success. ... AI also helps IT teams uncover patterns that are not immediately visible to the human eye. Predictive models built on historical performance data allow organizations to forecast demand, manage workloads more efficiently, and preemptively resolve issues before they disrupt service. This shift not only reduces downtime but also frees up resources to drive innovation across the enterprise. Moreover, companies that embrace data as a core business asset tend to nurture a culture of curiosity and informed experimentation. 


The DFIR Investigative Mindset: Brett Shavers On Thinking Like A Detective

You must be technical. You have to be technically proficient. You have to be able to do the actual technical work. And I’m not to rely on- not to bash a vendor training for a tool training, you have to have tool training, but you have to have exact training on “This is what the registry is, this is how you pull the-” you have to have that information first. The basics. You gotta have the basics, you have the fundamentals. And a lot of people wanna skip that. ... The DF guys, it’s like a criminal case. It’s “This is the computer that was in the back of the trunk of a car, and that’s what we got.” And the IR side is “This is our system and we set up everything and we can capture what we want. We can ignore what we want.” So if you’re looking at it like “Just in case something is gonna be criminal we might want to prepare a little bit,” right? So that makes DF guys really happy. If they’re coming in after the fact of an IR that becomes a case, a criminal case or a civil litigation where the DF comes in, they go, “Wow, this is nice. You guys have everything preserved, set up as if from the start you were prepared for this.” And it’s “We weren’t really prepared. We were prepared for it, we’re hoping it didn’t happen, we got it.” But I’ve walked in where drives are being wiped on a legal case. 


Daily Tech Digest - January 06, 2025

Should States Ban Mandatory Human Microchip Implants?

“U.S. states are increasingly enacting legislation to pre-emptively ban employers from forcing workers to be ‘microchipped,’ which entails having a subdermal chip surgically inserted between one’s thumb and index finger," wrote the authors of the report. "Internationally, more than 50,000 people have elected to receive microchip implants to serve as their swipe keys, credit cards, and means to instantaneously share social media information. This technology is especially popular in Sweden, where chip implants are more widely accepted for gym access, e-tickets on transit systems, and to store emergency contact information.” ... “California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision," Singularity Hub wrote. "In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.” That same piece quotes Alan Mardinly, who is director of biology at Science Corporation, as saying that the advantages of a biohybrid implant are that it "can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain."


AI revolution drives demand for specialized chips, reshaping global markets

There’s now a shift toward smaller AI models that only use internal corporate data, allowing for more secure and customizable genAI applications and AI agents. At the same time, Edge AI is taking hold, because it allows AI processing to happen on devices (including PCs, smartphones, vehicles and IoT devices), reducing reliance on cloud infrastructure and spurring demand for efficient, low-power chips. “The challenge is if you’re going to bring AI to the masses, you’re going to have to change the way you architect your solution; I think this is where Nvidia will be challenged because you can’t use a big, complex GPU to address endpoints,” said Mario Morales, a group vice president at research firm IDC. “So, there’s going to be an opportunity for new companies to come in — companies like Qualcomm, ST Micro, Renesas, Ambarella and all these companies that have a lot of the technology, but now it’ll be about how to use it. ... Enterprises and other organizations are also shifting their focus from single AI models to multimodal AI, or LLMs capable of processing and integrating multiple types of data or “modalities,” such as text, images, audio, video, and sensory input. The input from diverse resources creates a more comprehensive understanding of that data and enhances performance across tasks.


How to Address an Overlooked Aspect of Identity Security: Non-human Identities

Compromised identities and credentials are the No. 1 tactic for cyber threat actors and ransomware campaigns to break into organizational networks and spread and move laterally. Identity is the most vulnerable element in an organization’s attack surface because there is a significant misperception around what identity infrastructure (IDP, Okta, and other IT solutions) and identity security providers (PAM, MFA, etc.) can protect. Each solution only protects the silo that it is set up to secure, not an organization’s complete identity landscape, including human and non-human identities (NHIs), privileged and non-privileged users, on-prem and cloud environments, IT and OT infrastructure, and many other areas that go unmanaged and unprotected. ... Most organizations use a combination of on-prem management tools, a mix of one or more cloud identity providers (IdPs), and a handful of identity solutions (PAM, IGA) to secure identities. But each tool operates in a silo, leaving gaps and blind spots that open the door to attacks. Eight out of 10 organizations cannot prevent the misuse of service accounts in real time because visibility and security coverage are sporadic or missing. NHIs fly under the radar as security and identity teams sometimes don’t even know they exist.


Version Control in Agile: Best Practices for Teams

With multiple developers working on different features, fixes, or updates simultaneously, it’s easy for code to overlap or conflict without clear guidelines. Having a structured branching approach prevents confusion and minimizes the risk of one developer’s work interfering with another’s. ... One of the cornerstones of good version control is making small, frequent commits. In Agile development, progress happens in iterations, and version control should follow that same mindset. Large, infrequent commits can cause headaches when it’s time to merge, increasing the chances of conflicts and making it harder to pinpoint the source of issues. Small, regular commits, on the other hand, make it easier to track changes, test new functionality, and resolve conflicts early before they grow into bigger problems. ... An organized repository is crucial to maintaining productivity. Over time, it’s easy for the repository to become cluttered with outdated branches, unnecessary files, or poorly named commits. This clutter slows down development, making it harder for team members to navigate and find what they need. Teams should regularly review their repositories and remove unused branches or files that are no longer relevant. 


Abusing MLOps platforms to compromise ML models and enterprise data lakes

Machine learning operations (MLOps) is the practice of deploying and maintaining ML models in a secure, efficient and reliable way. The goal of MLOps is to provide a consistent and automated process to be able to rapidly get an ML model into production for use by ML technologies. ... There are several well-known attacks that can be performed against the MLOps lifecycle to affect the confidentiality, integrity and availability of ML models and associated data. However, performing these attacks against an MLOps platform using stolen credentials has not been covered in public security research. ... Data poisoning: This attack involves an attacker having access to the raw data being used in the “Design” phase of the MLOps lifecycle to include attacker-provided data or being able to directly modify a training dataset. The goal of a data poisoning attack is to be able to influence the data that is being trained in an ML model and eventually deployed to production. ... Model extraction attacks involve the ability of an attacker to steal a trained ML model that is deployed in production. An attacker could use a stolen model to extract sensitive training data such as the training weights used, or to use the predictive capabilities used in the model for their own financial gain. 
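The effect of the label-flipping style of data poisoning described here can be illustrated on synthetic data. The sketch below is a didactic toy using scikit-learn, not a description of any real attack tooling; all data and the 20% flip rate are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic two-class training data standing in for a curated training set.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clean_model = LogisticRegression().fit(X_train, y_train)

# Simulate an attacker who can silently flip 20% of training labels during the
# "Design" phase of the MLOps lifecycle, before the model is trained and deployed.
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[flip] = 1 - poisoned[flip]
poisoned_model = LogisticRegression().fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

The measurable accuracy drop is the point: poisoning influences what the production model learns without the attacker ever touching the deployed system.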


Get Going With GitOps

GitOps implementations have a significant impact on infrastructure automation by providing a standardized, repeatable process for managing infrastructure as code, Rose says. The approach allows faster, more reliable deployments and simplifies the maintenance of infrastructure consistency across diverse environments, from development to production. "By treating infrastructure configurations as versioned artifacts in Git, GitOps brings the same level of control and automation to infrastructure that developers have enjoyed with application code." ... GitOps' primary benefit is its ability to enable peer review for configuration changes, Peele says. "It fosters collaboration and improves the quality of application deployment." He adds that it also empowers developers -- even those without prior operations experience -- to control application deployment, making the process more efficient and streamlined. Another benefit is GitOps' ability to allow teams to push minimum viable changes more easily, thanks to faster and more frequent deployments, says Siri Varma Vegiraju, a Microsoft software engineer. "Using this strategy allows teams to deploy multiple times a day and quickly revert changes if issues arise," he explains via email. 


Balancing proprietary and open-source tools in cyber threat research

First, it is important to assess the requirements of an organization by identifying the capabilities needed, such as threat intelligence platforms or malware analysis tools. Next, evaluate open-source tools, which can be cost-effective and customizable but may require community support and frequent updates. In contrast, proprietary tools could offer advanced features, dedicated support, and better integration with other products. Finally, think about scalability and flexibility, as future growth may necessitate scalable solutions. ... The technology is not magic, but it is a powerful tool to speed up processes and bolster security procedures while also reducing the gap between advanced and junior analysts. However, as of today, the technology still requires verification and validation. Globally, security experts with a dual skill set in security and AI will be in high demand. As the adoption of generative AI systems increases, we need people who understand these technologies, because threat actors are also learning. ... If a CISO needs to evaluate the effectiveness of these tools, they first need to understand their needs and pain points and then seek guidance from experts. Adopting generative AI security solutions just because it is the latest trend is not the right approach.


Get your IT infrastructure AI-ready

Artificial intelligence adoption is a challenge many CIOs grapple with as they look to the future. Before jumping in, their teams must possess practical knowledge, skills, and resources to implement AI effectively. ... AI implementation is costly and the training of AI models requires a substantial investment. "To realize the potential, you have to pay attention to what it's going to take to get it done, how much it's going to cost, and make sure you're getting a benefit," Ramaswami said. "And then you have to go get it done." GenAI has rapidly transformed from an experimental technology to an essential business tool, with adoption rates more than doubling in 2024, according to a recent study by AI at Wharton ... According to Donahue, IT teams are exploring three key elements: choosing language models, leveraging AI from cloud services, and building a hybrid multicloud operating model to get the best of on-premise and public cloud services. "We're finding that very, very, very few people will build their own language model," he said. "That's because building a language model in-house is like building a car in the garage out of spare parts." Companies look to cloud-based language models, but must scrutinize security and governance capabilities while controlling cost over time. 


What is an EPMO? Your organization’s strategy navigator

The key is to ensure the entire strategy lifecycle is set up for success rather than endlessly iterating to perfect strategy execution. Without properly defining, governing, and prioritizing initiatives upfront, even the best delivery teams will struggle to achieve business goals in a way that drives the right return for the organization’s investment. For most organizations, there’s more than one gap preventing desired results. ... The EPMO’s job is to strip away unnecessary complexity and create frameworks that empower teams to deliver faster, more effectively, and with greater focus. PMO leaders should ask how this process helps to hit business goals faster. So by eliminating redundant meetings and scaling governance to match project size and risk, delivery timelines can shorten. This kind of targeted adjustment keeps momentum high without sacrificing quality or control. ... For an EPMO to be effective, ideally it needs to report directly to the C-suite. This matters because proximity equals influence. When the EPMO has visibility at the top, it can drive alignment across departments, break down silos, drive accountability, and ensure initiatives stay connected to overall business objectives serving as the strategy navigator for the C-suite.


Data Center Hardware in 2025: What’s Changing and Why It Matters

DPUs can handle tasks like network traffic management, which would otherwise fall to CPUs. In this way, DPUs reduce the load placed on CPUs, ultimately making greater computing capacity available to applications. DPUs have been around for several years, but they’ve become particularly important as a way of boosting the performance of resource-hungry workloads, like AI training, by complementing AI accelerators. This is why I think DPUs are about to have their moment. ... Recent events have underscored the risk of security threats linked to physical hardware devices. And while I doubt anyone is currently plotting to blow up data centers by placing secret bombs inside servers, I do suspect there are threat actors out there vying to do things like plant malicious firmware on servers as a way of creating backdoors that they can use to hack into data centers. For this reason, I think we’ll see an increased focus in 2025 on validating the origins of data center hardware and ensuring that no unauthorized parties had access to equipment during the manufacturing and shipping processes. Traditional security controls will remain important, too, but I’m betting on hardware security becoming a more intense area of concern in the year ahead.



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - September 15, 2024

Data Lakes Evolve: Divisive Architecture Fuels New Era of AI Analytics

“Data lakes led to the spectacular failure of big data. You couldn’t find anything when they first came out,” Sanjeev Mohan, principal at the SanjMo tech consultancy, told Data Center Knowledge. There was no governance or security, he said. What was needed were guardrails, Mohan explained. That meant safeguarding data from unauthorized access and respecting governance standards such as GDPR. It meant applying metadata techniques to identify data. “The main need is security. That calls for fine-grained access control – not just throwing files into a data lake,” he said, adding that better data lake approaches can now address this issue. Now, different personas in an organization are reflected in different permissions settings. ... This type of control was not standard with early data lakes, which were primarily “append-only” systems that were difficult to update. New table formats changed this. Table formats like Delta Lake, Iceberg, and Hudi have emerged in recent years, introducing significant improvements in data update support. For his part, Sanjeev Mohan said standardization and wide availability of tools like Iceberg give end-users more leverage when selecting systems. 
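The update support that table formats brought to formerly append-only lakes typically surfaces as SQL MERGE semantics. A hedged sketch, assuming a Spark session already configured for a format such as Delta Lake or Apache Iceberg, with placeholder table names:

```python
from pyspark.sql import SparkSession

# Assumes the session is configured for a table format (Delta Lake, Iceberg, ...)
# that supports MERGE; "lake.customers" and "staging.customer_updates" are placeholders.
spark = SparkSession.builder.appName("lake-upsert").getOrCreate()

# With a modern table format, late-arriving corrections become an upsert
# instead of rewriting whole append-only files by hand.
spark.sql("""
    MERGE INTO lake.customers AS target
    USING staging.customer_updates AS source
    ON target.customer_id = source.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```

Combined with the fine-grained access control mentioned above, this is the difference between "throwing files into a data lake" and managing governed, updatable tables.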


Data at the Heart of Digital Transformation: IATA's Story

It's always good to know what the business goals are, from a strategic perspective, which informs the data that is needed to enable digital transformation. Data is at the heart of digital transformation. Business strategy comes first and then data strategy, followed by technology strategy. At IATA, we formed the Data Steering Group and identified critical datasets across the organization. We then set up a data catalog and established a governance structure. This was followed by the launch of the Data Governance Committee and the role of a chief data officer. We're going to be implementing an automated data catalog and some automation tools around data quality. Data governance has allowed us to break down data silos. It has also enabled us to establish IATA's industry data strategy. We treat data as an asset, and that data is not owned by any particular division but looked at holistically at the organizational level. And that has allowed us opportunities to do some exciting things in the AI and analytics space and even in the way we deal with our third-party data suppliers and member airlines.


New Android Warning As Hackers Install Backdoor On 1.3 Million TV Boxes

"This is a clear example of how IoT devices can be exploited by malicious actors,” Ray Kelly, fellow at the Synopsys Software Integrity Group, said, “the ability of the malware to download arbitrary apps opens the door to a range of potential threats.” Everything from a TV box botnet for use in distributed denial of service attacks through to stealing account credentials and personal information. Responsibility for protecting users lies with the manufacturers, Kelly said, they must “ensure their products are thoroughly tested for security vulnerabilities and receive regular software updates.” "These off-brand devices discovered to be infected were not Play Protect certified Android devices,” a Google spokesperson said, “If a device isn't Play Protect certified, Google doesn’t have a record of security and compatibility test results.” Whereas these Play Protect certified devices have undergone testing to ensure both quality and user safety, other boxes may not have done. “To help you confirm whether or not a device is built with Android TV OS and Play Protect certified, our Android TV website provides the most up-to-date list of partners,” the spokesperson said.


Engineers Day: Top 5 AI-powered roles every engineering graduate should consider

Generative AI engineer: They play a pivotal role in analysing vast datasets to extract actionable insights and drive data-informed decision-making processes. This role demands a comprehensive understanding of statistical analysis, machine learning techniques, and programming languages such as Python and R. ... AI research scientist: They are at the forefront of advancing AI technologies through groundbreaking research and innovation. With a robust mathematical background, professionals in this role delve into programming languages such as Python and C++, harnessing the power of deep learning, natural language processing, and computer vision to develop cutting-edge solutions. ... Machine Learning engineer: Machine learning engineers are tasked with developing cutting-edge machine learning models and algorithms to address complex problems across various industries. To excel in this role, professionals must develop a strong proficiency in programming languages such as Python, along with a deep understanding of machine learning frameworks like TensorFlow and PyTorch. Expertise in data preprocessing techniques and algorithm development is also quite crucial here. 


Kubernetes attacks are growing: Why real-time threat detection is the answer for enterprises

Attackers are ruthless in pursuing the weakest threat surface of an attack vector, and with Kubernetes containers, runtime is becoming a favorite target. That’s because containers are live and processing workloads during the runtime phase, making it possible to exploit misconfigurations, privilege escalations or unpatched vulnerabilities. This phase is particularly attractive for crypto-mining operations where attackers hijack computing resources to mine cryptocurrency. “One of our customers saw 42 attempts to initiate crypto-mining in their Kubernetes environment. Our system identified and blocked all of them instantly,” Gil told VentureBeat. Additionally, large-scale attacks, such as identity theft and data breaches, often begin once attackers gain unauthorized access during runtime, where sensitive information is in use and thus more exposed. Based on the threats and attack attempts CAST AI saw in the wild and across their customer base, they launched their Kubernetes Security Posture Management (KSPM) solution this week. What is noteworthy about their approach is how it enables DevOps operations to detect and automatically remediate security threats in real time.


Begun, the open source AI wars have

Open source leader julia ferraioli agrees: "The Open Source AI Definition in its current draft dilutes the very definition of what it means to be open source. I am absolutely astounded that more proponents of open source do not see this very real, looming risk." AWS principal open source technical strategist Tom Callaway said before the latest draft appeared: "It is my strong belief (and the belief of many, many others in open source) that the current Open Source AI Definition does not accurately ensure that AI systems preserve the unrestricted rights of users to run, copy, distribute, study, change, and improve them." ... Afterwards, in a more sorrowful than angry statement, Callaway wrote: "I am deeply disappointed in the OSI's decision to choose a flawed definition. I had hoped they would be capable of being aspirational. Instead, we get the same excuses and the same compromises wrapped in a facade of an open process." Chris Short, an AWS senior developer advocate, Open Source Strategy & Marketing, agreed. He responded to Callaway that he: "100 percent believe in my soul that adopting this definition is not in the best interests of not only OSI but open source at large will get completely diluted."


What North Korea’s infiltration into American IT says about hiring

Agents working for the North Korean government use stolen identities of US citizens, create convincing resumes with generative AI (genAI) tools, and make AI-generated photos for their online profiles. Using VPNs and proxy servers to mask their actual locations — and maintaining laptop farms run by US-based intermediaries to create the illusion of domestic IP addresses — the perpetrators use either Western-based employees for online video interviews or, less successfully, real-time deepfake videoconferencing tools. And they even offer up mailing addresses for receiving paychecks. ... Among her assigned tasks, Chapman maintained a PC farm of computers used to simulate a US location for all the “workers.” She also helped launder money paid as salaries. The group even tried to get contractor positions at US Immigration and Customs Enforcement and the Federal Protective Services. (They failed because of those agencies’ fingerprinting requirements.) They did manage to land a job at the General Services Administration, but the “employee” was fired after the first meeting. A Clearwater, FL IT security company called KnowBe4 hired a man named “Kyle” in July. But it turns out that the picture he posted on his LinkedIn account was a stock photo altered with AI. 


Contesting AI Safety

The dangers posed by these machines arise from the idea that they “transcend some of the limitations of their designers.” Even if rampant automation and unpredictable machine behavior may destroy us, the same technology promises unimaginable benefits in the far future. Ahmed et al. describe this epistemic culture of AI safety that drives much of today’s research and policymaking, focused primarily on the technical problem of aligning AI. This culture traces back to the cybernetics and transhumanist movements. In this community, AI safety is understood in terms of existential risks—unlikely but highly impactful events, such as human extinction. The inherent conflict between a promised utopia and cataclysmic ruin characterizes this predominant vision for AI safety. Both the AI Bill of Rights and SB 1047 assert claims about what constitutes a safe AI model but fundamentally disagree on the definition of safety. A model deemed safe under SB 1047 might not satisfy the Safe and Effective principle of the White House AI Blueprint; a model that follows the AI Blueprint could cause critical harm. What does it truly mean for AI to be safe? 


Why Companies Should Embrace Ethical Hackers

Security researchers (or hackers, take your pick) are generally good people motivated by curiosity, not malicious intent. Making guesses, taking chances, learning new things, and trying and failing and trying again is fun. The love of the game and ethical principles are two separate things, but many researchers have both in spades. Unfortunately, the government has historically sided with corporations. Scared by the Matthew Broderick movie WarGames plot, Ronald Reagan initiated legislation that resulted in the Computer Fraud and Abuse Act of 1986 (CFAA). Good-faith researchers have been haunted ever since. Then there is The Digital Millennium Copyright Act (DMCA) of 1998, which made it explicitly illegal to “circumvent a technological measure that effectively controls access to a work protected under [copyright law],” something necessary to study many products. A narrow harbor for those engaging in encryption research was carved out in the DMCA, but otherwise, the law put researchers further in danger of legal action against them. All this naturally had a chilling effect as researchers grew tired of being abused for doing the right thing. Many researchers stopped bothering with private disclosures to companies with vulnerable products and took their findings straight to the public. 


Why AI Isn't Just Hype - But A Pragmatic Approach Is Required

It is far better to take a pragmatic view where you open yourself up to the possibilities but proceed with both caution and some help. That must start with working through the buzzwords and trying to understand what people mean, at least at a top level, by an LLM or a vector search or maybe even a Naive Bayes algorithm. But then, it is also important to bring in a trusted partner to help you move to the next stage to build an amazing new digital product, or to undergo a digital transformation with an existing digital product. Whether you’re in start-up mode, you are already a scale-up with a new idea, or you’re a corporate innovator looking to diversify with a new product – whatever the case, you don’t want to waste time learning on the job, and instead want to work with a small, focused team who can deliver exceptional results at the speed of modern digital business. ... Whatever happens or doesn’t happen to GenAI, as an enterprise CIO you are still going to want to be looking for tech that can learn and adapt from circumstance and so help you do the same. At the end of the day, hype cycle or not, AI is really the one tool in the toolbox that can continuously work with you to analyse data in the wild and in non-trivial amounts.



Quote for the day:

"Your attitude is either the lock on or key to your door of success." -- Denis Waitley

Daily Tech Digest - July 22, 2024

AI regulation in peril: Navigating uncertain times

Existing laws are often vague in many fields, including those related to the environment and technology, leaving interpretation and regulation to the agencies. This vagueness in legislation is often intentional, for both political and practical reasons. Now, however, any regulatory decision by a federal agency based on those laws can be more easily challenged in court, and federal judges have more power to decide what a law means. This shift could have significant consequences for AI regulation. Proponents argue that it ensures a more consistent interpretation of laws, free from potential agency overreach. However, the danger of this ruling is that in a fast-moving field like AI, agencies often have more expertise than the courts. ... The judicial branch has no such existing expertise. Nevertheless, the majority opinion said that “…agencies have no special competence in resolving statutory ambiguities. Courts do.” ... Going forward, then, when passing a new law affecting the development or use of AI, if Congress wished for federal agencies to lead on regulation, they would need to state this explicitly within the legislation. Otherwise, that authority would reside with the federal courts. 


Fostering Digital Trust in India's Digital Transformation journey

In this era where digital interactions dominate, trust is the anchor for building resilient organizations and stronger relationships with stakeholders and customers. As per ISACA’s State of Digital Trust 2023 research, 90 percent of respondents in India say digital trust is important and 89 percent believe its importance will increase in the next five years. Nowhere is this truer than in India, the world’s largest digitally connected democracy and a burgeoning hub of digital innovation and transformation. ... A key hurdle in building and maintaining digital trust in most countries is the absence of a standardized conceptual framework for measurement and access to reliable internet infrastructure and digital literacy. In India’s case, with a rapidly expanding digital footprint comes an equally high threat of issues such as lack of funding, unavailability of technological resources, shortage of skilled workforce, lack of alignment between digital trust and enterprise goals, inadequate governance mechanisms, the spread of misinformation through social media, etc. leading to financial fraud and data theft. 


Tech debt: the hidden cost of innovation

While tech debt may seem like an unavoidable cost for any business heavily investing in innovation, delving deeper into its causes can reveal issues that may derail operations entirely. Many organisations struggle to find a solution, as the time required for risk analysis can seem unfeasible. Yet, by recognising early signs, businesses can leverage the right tools and find the right partners to facilitate a low-risk and controlled modernisation of legacy systems. Any IT modernisation program requires a strategic, evidence-based approach, starting with a rigorous fact-finding process to identify opportunities and inefficiencies within legacy systems. ... Making a case for modernisation requires articulating the expected benefits, costs and challenges beforehand. This begins with a comprehensive analysis that identifies existing system functionality and data against business and technical requirements, highlighting any gaps or challenges. ... In extreme situations, it may be necessary to replace an entire system. This is always the last resort due to the large investment needed and the disruption it can cause. 


Fake Websites, Phishing Surface in Wake of CrowdStrike Outage

These fake sites often promise quick fixes or falsely offer cryptocurrency rewards to lure visitors into accessing malicious content. George Kurtz, CEO of CrowdStrike, emphasized the importance of using official communication channels, urging customers to be wary of imposters. "Our team is fully mobilized to secure and stabilize our customers' systems," Kurtz said, noting the significant increase in phishing emails and phone calls impersonating CrowdStrike support staff. Imposters have also posed as independent researchers selling fake recovery solutions, further complicating efforts to resolve the outage. Rachel Tobac, founder of SocialProof Security, warned about social engineering threats in a series of tweets on X, formerly Twitter. "Criminals are exploiting the outage as cover to trick victims into handing over passwords and other sensitive codes," Tobac warned. She advised users to verify the identity of anyone requesting sensitive information. The surge in cybercriminal activity in the wake of the outage follows a common tactic used by cybercriminals to exploit chaotic situations.


Under-Resourced Maintainers Pose Risk to Africa's Open Source Push

To shore up security and avoid the dangers of under-resourced projects, companies have a few options, all starting with determining which OSS their developers and operations rely on. To that end, software bills of materials (SBOMs) and software composition analysis (SCA) software can help enumerate what's in the environment, and potentially help trim down the number of packages that companies need to check, verify, and manage, says Chris Hughes, chief security adviser for software supply chain security firm Endor Labs. "There's simply so much software, so many projects, so many libraries, that the idea of ... monitoring them all actively is just — it's very hard," he says. Finally, educating developers and package managers on how to produce and manage code securely is another area that can produce significant gains. The OpenSSF, for example, has created a free course LFD 121 as part of that effort. "We'll be building a course on security architectures, which will also be released later this year," OpenSSF's Arasaratnam says. "As well as a course on security for not just engineers, but engineering managers, as we believe that's a critical part of the equation."


Cross-industry standards for data provenance in AI

Knowing the source and history of datasets can help organizations better assess their reliability and suitability for training or fine-tuning AI models. This is crucial because the quality of training data directly affects the performance and accuracy of AI models. Understanding the characteristics and limitations of the training data also allows for a better assessment of model performance and potential failure modes. ... As AI regulations such as the EU AI Act evolve, data provenance becomes increasingly important for demonstrating compliance. It allows organizations to show that they use data appropriately and align with relevant laws and regulations. ... Organizations should start by reviewing the standards documentation, including the Executive Overview, use case scenarios, and technical specifications (available on GitHub). Launching a proof of concept (PoC) with a data provider is recommended to build internal confidence. Organizations lacking resources or deploying a PoC “light” may opt to use our metadata generator tool to create and access standardized metadata files.
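A lightweight way to start attaching provenance is a standardized sidecar metadata file generated next to each dataset. The fields below are illustrative assumptions and do not reproduce the published standard's actual schema or the referenced generator tool:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance(dataset_path: str, source: str, license_name: str) -> Path:
    """Emit a simple provenance sidecar file next to a dataset (illustrative fields only)."""
    data = Path(dataset_path).read_bytes()
    record = {
        "dataset": Path(dataset_path).name,
        "sha256": hashlib.sha256(data).hexdigest(),    # tamper-evident fingerprint
        "source": source,                               # where the data came from
        "license": license_name,                        # terms it was obtained under
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    out = Path(dataset_path).with_suffix(".provenance.json")
    out.write_text(json.dumps(record, indent=2))
    return out

# Example usage with a placeholder file.
Path("claims.csv").write_text("id,amount\n1,100\n")
print(write_provenance("claims.csv", source="internal-claims-system", license_name="internal-use"))
```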


Why an Agile Culture Is Critical for Enterprise Innovation

In the end, embracing agility isn’t just about staying afloat in the turbulent waters of AI innovation; it’s about turning those waves into opportunities for growth and transformation. Because in this ever-evolving landscape, the businesses that thrive will be the ones that are flexible, responsive, and always ready to adapt to whatever comes next. Which brings me to my next point – you need to start loving failure. This requires a whole reframe because in the world of AI, getting things wrong can actually be the fastest way to get things right. Most companies are so scared of getting it wrong that they never try anything new and are frozen like a deer in headlights. In AI, that’s a death sentence. ... Be prepared for resistance. Change is scary, and you’ll always have a few “blockers” who are negative in their approach. These are the people you need to win over the most. In the meantime, you just need to weather the storm. Lastly, remember that becoming agile is a journey, not a destination. It’s about creating a mindset of continuous improvement. Always in beta? That’s absolutely fine and in the fast-paced world of AI, that’s exactly where you want to be.


The Rise of Cybersecurity Data Lakes: Shielding the Future of Data

Beyond real-time threat detection and analysis, cybersecurity data lakes offer organizations a powerful platform for vulnerability prediction and risk assessment. By examining past incidents, organizations can uncover trends and commonalities in security breaches, weak points in their defenses, and recurring threats. Cybersecurity data lakes store vast amounts of data spanning extended periods, which is a rich source of information for identifying recurring vulnerabilities or attack vectors. With techniques such as time-series analysis and pattern recognition, organizations can uncover historical vulnerability patterns through rigorous testing and use this knowledge to anticipate and mitigate future risks. In fact, this is one of the reasons why the global pentesting market is expected to rise to a value of $5 billion by 2031, with more innovative approaches like blackbox pentesting to exploit hidden attack vectors and using AI for vulnerability assessment (VAS) to improve efficiency. When combined with other vulnerability assessment methods like threat modeling and red team exercises, predictive modeling can also help organizations identify potential attack paths and attack surface areas and proactively implement defensive measures.
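One elementary form of the time-series analysis mentioned above is aggregating historical incidents by category and month, then flagging categories whose latest counts deviate from their baseline. A small pandas sketch with toy data; the field names and the two-standard-deviation threshold are assumptions for illustration:

```python
import pandas as pd

# Toy incident log; in practice this would be queried from the security data lake.
incidents = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-05", "2024-02-11", "2024-03-02", "2024-04-20",
        "2024-05-14", "2024-06-03", "2024-06-15", "2024-06-28",
    ]),
    "category": ["phishing", "phishing", "misconfig", "phishing",
                 "misconfig", "phishing", "phishing", "phishing"],
})

# Count incidents per category per month.
monthly = (incidents
           .assign(month=incidents["timestamp"].dt.to_period("M"))
           .groupby(["month", "category"])
           .size()
           .unstack(fill_value=0))

# Flag categories whose latest month exceeds baseline mean + 2 standard deviations.
baseline = monthly.iloc[:-1]
latest = monthly.iloc[-1]
threshold = baseline.mean() + 2 * baseline.std()
print(latest[latest > threshold])
```

The same aggregation, run over years of lake data rather than a toy frame, is what lets teams spot recurring attack vectors and prioritize testing against them.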


Internships can be a gold mine for cybersecurity hiring

Though an internship can pay off for an employer in the form of a fresh crop of talent to hire, it requires the company to invest time, planning, oversight, and resources. Designating one or more people to manage the process internally can make things easier for the organization. “Sit down with the supervisory personnel so they understand what that position is being advertised for, what the expected outcomes are and how to manage that intern, the program needs, and how they have to report [on that intern],” ... If possible, Smith recommends mentoring an intern, not simply ticking off a bureaucratic checklist of their tasks: “I do fervently believe you essentially need a sponsor, someone who’s going to take the intern under his or her wing and nurture that relationship, nurture that person.” Chiasson warns employers to manage their own expectations as carefully as they manage the interns themselves. Rather than expecting a unicorn to show up — an intern with one or more degrees, several technical certifications and other prior workplace experience — she urges companies to “take them on and then train them based on what you require.”


Desirable Data: How To Fall Back In Love With Data Quality

With so much data being pumped out at breakneck rates, it can seem like an insurmountable challenge to ensure data accuracy, completeness, and consistency. And despite technological, governance and team efforts, poor data can still endure. As such, maintaining data quality can feel like a perennial challenge. But quality data is fundamental to a company’s digital success. In order to create a business case for embracing data quality, you have to, firstly, demonstrate the far-reaching consequences of poor data quality on organisational performance. If you can present the problem from a business standpoint — backed by evidence and real-world scenarios of data quality issues leading to incurred costs, reputational risk, and uncapitalised opportunities — you can implement proactive measures and trigger a desire by top-level management to adapt processes. To bring your case to life, you then have to find ways of quantifying the business impact of data quality issues. This could take the form of illustrating the effect of bad data on a marketing campaign, showing the difference with and without data quality in relation to usable records, sales leads, and how this impacts your revenue.
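Quantifying that business impact can start with basic quality metrics (completeness, validity, duplicates) plus an assumed conversion model that translates unusable records into revenue at risk. Every figure in this sketch is an illustrative assumption, not a benchmark:

```python
import pandas as pd

# Toy CRM extract; in practice this comes from the marketing or sales system.
leads = pd.DataFrame({
    "email": ["a@example.com", None, "c@example", "d@example.com", "a@example.com"],
    "country": ["DE", "FR", None, "US", "DE"],
})

total = len(leads)
missing_email = leads["email"].isna().sum()
invalid_email = leads["email"].dropna().apply(lambda e: "." not in e.split("@")[-1]).sum()
duplicates = leads.duplicated().sum()

usable = total - missing_email - invalid_email - duplicates
usable_rate = usable / total

# Translate into business terms under assumed conversion and deal values.
conversion_rate, avg_deal_value = 0.02, 5_000   # assumptions for illustration
lost_revenue = (total - usable) * conversion_rate * avg_deal_value
print(f"usable records: {usable}/{total} ({usable_rate:.0%})")
print(f"estimated revenue at risk from bad records: ${lost_revenue:,.0f}")
```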



Quote for the day:

"Defeat is not bitter unless you swallow it." -- Joe Clark

Daily Tech Digest - October 10, 2023

Crafting Leaders: The finishing touches

The process of narrowing the funnel for identifying future leaders must commence soon after fresh talent is inducted within the organization and certainly long before organizational knocks have bled the spirit, energy and desire-to-be-different from these young men and women. An earlier column explained how alternative fast-track schemes function and ways to choose and groom future leaders from early stages. More recently, I have added two codas to the exposition. When choosing leaders to face the uncertainties of tomorrow, it is not enough to capture their capabilities at the time of selection; one must also take into account the steepness of the slope they have traversed to get there. That is the best guarantee of future resilience and continued development in spite of handicaps. Moreover, constraints of time and shortage of the right kind of teachers prevent those running to the top of the pyramid from formally refreshing their knowledge and capabilities as frequently as they should. ... The grooming of Fast-Trackers (FTers) must vary substantially from company to company and from individual to individual.


The undeniable benefits of making cyber resiliency the new standard

"It's about practicing due care and due diligence from a cybersecurity standpoint and having a layered defense with a layered people-process-and-technology-driven program with the right governance and services and tools to enable the mission of the organization so that if there's an event, you can recover and adapt to keep business running," he adds. To do that, CISOs and their executive colleagues must have their cybersecurity basics well established -- basics such as knowing their tolerance for risk, understanding their IT environment, their security controls, their vulnerabilities, and how those all could impact the organization's operations. CISOs aren't limited to these frameworks or the assessment tools created specifically to measure cyber resiliency, says Tenreiro de Magalhaes and others. CISOs can also run tabletop drills and red-team exercises to test, measure and report on resiliency. Repeating such drills and exercises can then track whether the organization's cybersecurity program as well as specific additions to it help improve resiliency over time, experts say.


Hybrid work is in trouble. Here are 4 ways to make it work in the longer term

"We're all humans and we work with each other," he says. "To make hybrid working effective, there must be an element of interaction. There must be a connectivity, both to the business and your team." Warne says balance is essential, so find the right reasons for bringing people together in the office. "At River Island, it's about making sure that people are in for a purpose and not just presenteeism, and making sure that the people who need to work together are able to work together," he says. "If you work with a colleague, it's crucial you don't have a situation where one of you comes into the office and the other one works from home." Warne says his team doesn't have mandated days in the office. Instead, his organization's hybrid-working strategy is all about collaboration. ... However, hybrid working has allowed for an even higher level of flexibility in her organization -- and the key to success has been constant communication. Cousineau continues to listen to feedback from her team. One staff member suggested hybrid all-team meetings were creating a big divide between those who were present and those who weren't.


Evolution of stronger cyber threat actors: The flip side of Gen AI story

Deepfake technology, a subset of Generative AI, allows threat actors to create convincing video and audio forgeries. This presents a substantial threat to organisations as deepfake attacks can tarnish reputations, manipulate public opinion, and even influence financial markets. Imagine a scenario where a CEO’s voice is convincingly mimicked, disseminating false information that impacts stock prices; or consider a deepfake video of a prominent figure endorsing a product or idea they never actually supported. Such manipulations can lead to severe consequences for businesses and society at large. Generative AI is revolutionising the way malware is created. Threat actors can use AI algorithms to generate highly evasive and adaptable malware variants that can easily evade traditional signature-based antivirus solutions. These AI-generated malware strains constantly evolve, making detection and containment a significant challenge for cybersecurity professionals. Moreover, Generative AI allows for the customisation of malware based on the target environment. 


The CIO’s primary job: Developing future IT leaders

The challenge for IT management is to find people who are good at their current job but are also interested in the management side that is necessary for departmental success. In my opinion, the reason many IT departments have decided to go outside IT to bring in CIOs is because IT has not fostered the kind of environment that develops these types of professionals. IT has not traditionally tried very hard to develop strong managers from within. Most people learn to manage by watching what their managers do. And if people have bad managers, the results can be less than optimum. So how do we change that conundrum? First, we must commit our current managers and supervisors to a strong management training program. Once they have been trained in the subtleties of management, then we hopefully will begin to see new managers with skills developed from within. Effective management training can, and should be, structured around techniques that current managers use to be successful. Delegating effectively and encouraging career growth among staff are two examples.


Evolution of Data Partitioning: Traditional vs. Modern Data Lakes

In modern data lakes, data is organized into logical partitions based on specific attributes or criteria, such as day, hour, year, or region. Each partition acts as a subset of the data, making it easier to manage, query, and optimize data retrieval. Partitioning enhances both data organization and query performance. Instead of relying solely on directory-based partitioning or basic column-based partitioning, these systems provide support for complex, nested, and multi-level partitioning structures. This means that data can be partitioned using multiple attributes simultaneously, allowing for highly efficient data pruning during queries. ... Snapshots are a fundamental concept used to capture and manage different versions or states of a table at specific points in time. Snapshots are a key feature that enables Time Travel, data auditing, schema evolution, and query consistency within modern data lakes such as Iceberg tables. Some important features of snapshots are as follows: each snapshot represents a specific version of the data table. When you create a snapshot, it essentially freezes the state of the table at the moment the snapshot is taken.
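
To make the multi-attribute partitioning and snapshot ideas concrete, here is a minimal sketch using Spark SQL with Apache Iceberg. The catalog name (demo), table schema, and dates are illustrative assumptions, not taken from the article, and the Spark session is assumed to be configured with an Iceberg catalog.

```python
from pyspark.sql import SparkSession

# Minimal sketch: multi-level partitioning and time travel on an Iceberg table.
# Assumes a Spark session already configured with an Iceberg catalog named "demo".
spark = SparkSession.builder.appName("iceberg-partitioning-sketch").getOrCreate()

# Partition by two attributes at once (region plus the day of the event timestamp),
# so queries filtering on either attribute can prune whole partitions.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.sales.events (
        event_id BIGINT,
        region   STRING,
        amount   DOUBLE,
        event_ts TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (region, days(event_ts))
""")

# Only files under the matching region/day partitions need to be scanned.
spark.sql("""
    SELECT region, SUM(amount) AS total
    FROM demo.sales.events
    WHERE region = 'EU' AND event_ts >= TIMESTAMP '2023-07-01 00:00:00'
    GROUP BY region
""").show()

# Every committed write creates a snapshot; time travel reads that frozen state.
spark.sql("SELECT * FROM demo.sales.events TIMESTAMP AS OF '2023-07-01 00:00:00'").show()
```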


Will Quantum Computers Become the Next Cyber-Attack Platform?

A quantum cyberattack would likely be similar to today’s identity theft and data breaches. “The only difference is that the damage would be more widespread, since quantum computers could attack a broad class of encryption algorithms rather than just the particular way that a company or data center implements the algorithm, which is how attacks are currently done,” explains Eric Chitambar, associate professor of electrical and computer engineering at the Grainger College of Engineering at the University of Illinois Urbana-Champaign. Chitambar also leads the college’s Quantum Information Group. ... Conducting an enterprise-wide quantum risk assessment to help identify systems that might be most vulnerable to a quantum attack would be a good place to start, Staab says. He also recommends deploying enterprise-wide Quantum Random Number Generator (QRNG) technology to generate quantum-resistant encryption keys. This approach promises crypto agility, implementation of Quantum Key Distribution (QKD) and the development of quantum-resistant algorithms. “As we head toward a quantum computing era, adopting a zero-trust architecture will become more important than ever,” Staab states.


6 Reasons Private LLMs Are Key for Enterprises

Private LLMs can be used with sensitive data, such as hospital patient records or financial data, bringing the power of generative AI to bear on groundbreaking work in these fields. With the LLM running on your private infrastructure and exposed only to the people who should have access to it, you can build powerful customer-focused applications and chatbots, or simply provide an easier way for your employees to interact with your company data, without the risk of sending the data to a third party. ... With private LLMs, you can tailor the model and its responses to your company, industry or customers' needs. Such specific information is unlikely to be included in general or public LLMs. You can feed your LLM with customer support cases, internal knowledge-base articles, sales data, application usage data and much more, ensuring that the responses you receive are what you're looking for. ... Controlling the versioning of the model you're using is extremely important, because if you change the model that you use to create embeddings, you will need to re-create (or version) all the embeddings you store.
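
As a rough illustration of that last point, one way to keep embeddings in sync with the model that produced them is to record the model version alongside each vector. In the sketch below, embed_with_local_model and the version string are hypothetical stand-ins for whatever privately hosted embedding model an organization actually runs.

```python
from dataclasses import dataclass
from typing import List

# Record which model produced each embedding, so a model change makes stale
# vectors easy to find and re-create. The model name is a made-up placeholder.
EMBEDDING_MODEL_VERSION = "internal-embedder-v2"

@dataclass
class StoredEmbedding:
    doc_id: str
    vector: List[float]
    model_version: str

def embed_with_local_model(text: str) -> List[float]:
    # Placeholder: call your privately hosted embedding model here.
    raise NotImplementedError

def index_document(store: List[StoredEmbedding], doc_id: str, text: str) -> None:
    store.append(StoredEmbedding(doc_id, embed_with_local_model(text), EMBEDDING_MODEL_VERSION))

def needs_reembedding(store: List[StoredEmbedding]) -> List[str]:
    # Vectors from an older model cannot be compared with new ones; re-embed them first.
    return [e.doc_id for e in store if e.model_version != EMBEDDING_MODEL_VERSION]
```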


Tech Revolution: The Rise of Automation and Its Impact on Society

To offset potential adverse effects, it is imperative for companies and governments to enact policies and initiatives supporting workers susceptible to automation’s impact. This may encompass training programs designed to furnish workers with the requisite skills to excel in the evolving job market, along with social support programs to aid those grappling with employment challenges. Public policy will emerge as a pivotal determinant of technological evolution’s trajectory and consequences. Economic incentives, education reforms, and immigration policies will directly influence productivity, employment levels, and enhanced economic mobility. ... Central and state government agencies ought to collaborate with industry partners and educational institutions to craft programs that equip new workers with the skills needed to thrive in an automation-driven world. These programs bear the potential to combat emerging inequality by propelling education and training initiatives that foster success for all.


When open source cloud development doesn't play nice

Remember that the cloud provider is merely “providing” the open source software. They are not typically supporting it beyond that. For more, you’ll need to look internally or in other places. Open source users, whether in the cloud or not, often have to rely on community resources, typically provided through forums or message boards, which takes time. This can impede cloud development progress in urgent, time-sensitive scenarios or when issues are complex. A developer once told me that she needed to attend a meeting of the open source community before she could get a resolution to a specific problem—a meeting that was five weeks out. That won’t work. From a security standpoint, open source software can pose specific challenges. Although a community of developers regularly reviews such software, it can still harbor undetected vulnerabilities, primarily because its code is openly accessible. For instance, some open source supply chain issues arose a few years ago. These vulnerabilities can become severe security threats without stringent security measures and frequent updates.



Quote for the day:

"Sometimes it takes a good fall to really know where you stand." -- Hayley Williams

Daily Tech Digest - July 31, 2023

The open source licensing war is over

Too many open source warriors think that the license is the end, rather than just a means to grant largely unfettered access to the code. They continue to fret about licensing when developers mostly care about use, just as they always have. Keep in mind that more than anything else, open source expands access to quality software without involving the purchasing or (usually) legal teams. This is very similar to what cloud did for hardware. The point was never the license. It was always about access. Back when I worked at AWS, we surveyed developers to ask what they most valued in open source leadership. You might think that contributing code to well-known open source projects would rank first, but it didn’t. Not even second or third. Instead, the No. 1 criterion developers used to judge a cloud provider’s open source leadership was that it “makes it easy to deploy my preferred open source software in the cloud.” ... One of the things we did well at AWS was to work with product teams to help them discover their self-interest in contributing to the projects upon which they were building cloud services, such as Elasticache.


Navigate Serverless Databases: A Guide to the Right Solution

One of the core features of Serverless is the pay-as-you-go pricing. Almost all Serverless databases attempt to address a common challenge: how to provision resources economically and efficiently under uncertain workloads. Prioritizing lower costs may mean consuming fewer resources. However, in the event of unexpected spikes in business demand, you may have to compromise user experience and system stability. On the other hand, more generous and secure resource provisioning leads to resource waste and higher costs. Striking a balance between these two styles requires complex and meticulous engineering management. This would divert your focus from the core business. Furthermore, the Pay-as-you-go billing model has varying implementations in different Serverless products. Most Serverless products offer granular billing based on storage capacity and read/write operations per unit. This is largely possible due to the distributed architecture that allows finer resource scaling. 
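
As a back-of-the-envelope illustration of that billing model, the sketch below estimates a monthly bill from storage and per-operation charges. The rates are hypothetical placeholders, not any vendor's actual pricing.

```python
# Rough illustration of pay-as-you-go billing based on storage capacity and
# read/write operations. All rates are hypothetical placeholders.
STORAGE_RATE_PER_GB_MONTH = 0.25   # hypothetical $/GB-month
READ_RATE_PER_MILLION = 0.20       # hypothetical $/million reads
WRITE_RATE_PER_MILLION = 1.00      # hypothetical $/million writes

def estimate_monthly_cost(storage_gb: float, reads: int, writes: int) -> float:
    """Estimate one month's bill under granular, usage-based billing."""
    return (
        storage_gb * STORAGE_RATE_PER_GB_MONTH
        + (reads / 1_000_000) * READ_RATE_PER_MILLION
        + (writes / 1_000_000) * WRITE_RATE_PER_MILLION
    )

# A spiky workload pays only for what it used, not for what was provisioned:
print(estimate_monthly_cost(storage_gb=50, reads=120_000_000, writes=8_000_000))  # 44.5
```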


Building a Beautiful Data Lakehouse

It’s common to compensate for the respective shortcomings of existing repositories by running multiple systems, for example, a data lake, several data warehouses, and other purpose-built systems. However, this process frequently creates a few headaches. Most notably, data stored in one repository type is often excluded from analytics run on another, which is suboptimal in terms of the results. In addition, having multiple systems requires the creation of expensive and operationally burdensome processes to move data from lake to warehouse if required. To overcome the data lake’s quality issues, for example, many often use extract/transform/load (ETL) processes to copy a small subset of data from lake to warehouse for important decision support and BI applications. This dual-system architecture requires continuous engineering to ETL data between the two platforms. Each ETL step risks introducing failures or bugs that reduce data quality. Second, leading ML systems, such as TensorFlow, PyTorch, and XGBoost, don’t work well on data warehouses. 
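
A small sketch of the alternative the lakehouse argues for: ML tooling reads the open-format table in place rather than ETL-copying a subset into a warehouse first. The path, column names, and filter below are assumptions for illustration only.

```python
import pyarrow.dataset as ds

# Minimal sketch: read an open-format (Parquet) lake table directly, with the
# filter pushed down to the file scan, then hand the result to an ML library.
# The path could equally be an object-store location if a filesystem is configured.
lake = ds.dataset("/data/lake/events/", format="parquet")

table = lake.to_table(
    columns=["feature_a", "feature_b", "label"],
    filter=ds.field("event_date") >= "2023-01-01",
)
features = table.to_pandas()  # ready for XGBoost, PyTorch, etc., with no ETL copy
```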


How the best CISOs leverage people and technology to become superstars

Exemplary CISOs are also able to address other key pain points that traditionally flummox good cybersecurity programs, such as the relationships between developers and application security (AppSec) teams, or how cybersecurity is viewed by other C-suite executives and the board of directors. For AppSec relations, good CISOs realize that developer enablement helps to shift security farther to the so-called left and closer to a piece of software’s origins. Fixing flaws before applications are dropped into production environments is important, and much better than the old way of building code first and running it past the AppSec team at the last minute to avoid those annoying hotfixes and delays to delivery. But it can’t solve all of AppSec’s problems alone. Some vulnerabilities may not show up until applications get into production, so relying on shifting left in isolation to catch all vulnerabilities is impractical and costly. There also needs to be continuous testing and monitoring in the production environment, and yes, sometimes apps will need to be sent back to developers even after they have been deployed. 


TSA Updates Pipeline Cybersecurity Directive to Include Regular Testing

The revised directive, developed with input from industry stakeholders and federal partners including the Cybersecurity and Infrastructure Security Agency (CISA) and the Department of Transportation, will “continue the effort to reinforce cybersecurity preparedness and resilience for the nation’s critical pipelines”, the TSA said. The reissued security directive for critical pipeline companies follows the initial directive announced in July 2021 and renewed in July 2022. The TSA said that the requirements issued in the previous years remain in place. According to the 2022 security directive update, pipeline owners and operators are required to establish and execute a TSA-approved cybersecurity implementation plan with specific cybersecurity measures, and to develop and maintain a cybersecurity incident response plan (CIRP) that includes measures to be taken during cybersecurity incidents.


What is the cost of a data breach?

"One particular cost that continues to have a major impact on victim organizations is theft/loss of intellectual property," Glenn J. Nick, associate director at Guidehouse, tells CSO. "The media tend to focus on customer data during a breach, but losing intellectual property can devastate a company's growth," he says. "Stolen patents, engineering designs, trade secrets, copyrights, investment plans, and other proprietary and confidential information can lead to loss of competitive advantage, loss of revenue, and lasting and potentially irreparable economic damage to the company." It's important to note that how a company responds to and communicates a breach can have a large bearing on the reputational impact, along with the financial fallout that follows, Mellen says. "Understanding how to maintain trust with your consumers and customers is really, really critical here," she adds. "There are ways to do this, especially around building transparency and using empathy, which can make a huge difference in how your customers perceive you after a breach. If you try to sweep it under the rug or hide it, then that will truly affect their trust in you far more than the breach alone."


Meeting Demands for Improved Software Reliability

“Developers need to fix bugs, address performance regressions, build features, and get deep insights about particular service or feature level interactions in production,” he says. That means they need access to the necessary data in views, graphs, and reports that make a difference to their workflows. “However, this data must be integrated and aligned with IT operators to ensure teams are working across the same data sets,” he says. Sigelman says IT operations is a crucial part of an organization’s overall reliability and quality posture. “By working with developers to connect cloud-native systems such as Kubernetes with traditional IT applications and systems of record, the entire organization can benefit from a centralized data and workflow management pane,” he says. From this point, event and change management can be combined with observability instruments, such as service level objectives, to provide not only a single view across the entire IT estate but also to demonstrate the value of reliability to the entire organization.
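
To make the service level objective idea concrete, here is a tiny sketch of computing the remaining error budget for an SLO. The target and request counts are made-up numbers for the example.

```python
# Small illustration of an SLO as an observability instrument: track how much
# of the error budget remains for the current window. Numbers are illustrative.
SLO_TARGET = 0.999  # 99.9% of requests should succeed over the window

def error_budget_remaining(total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget left for the current SLO window."""
    allowed_failures = total_requests * (1 - SLO_TARGET)
    if allowed_failures == 0:
        return 0.0
    return max(0.0, 1 - failed_requests / allowed_failures)

# 10M requests with 4,200 failures: about 42% of the budget burned, 58% remaining.
print(f"{error_budget_remaining(10_000_000, 4_200):.1%}")
```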


How will artificial intelligence impact UK consumers lives?

In the next five years, I expect we may see a rise in new credit options and alternatives, such as “Predictive Credit Cards,” where AI anticipates a consumer’s spending needs based on their past behaviour and adjusts the credit limit or offers tailored rewards accordingly. Additionally, fintechs are likely to integrate Large Language Models (LLMs) and add AI to digital and machine-learning powered services. ... Through AI, consumers may also be able to access a better overview of their finances, specifically personalised financial rewards, as they would have access to tools to review all transactions, receive recommendations on personalised spend-based rewards, and even benchmark themselves against other cardholders in similar demographics or industry standards. Consumers may also be able to ask questions and get answers at the click of a button, for example, ‘How much debt do I have compared to my available credit limits?’ or ‘What’s the best way to use my rewards points based on my recent purchases?’, improving financial literacy and potentially providing them with more spending/saving power and personalised experiences in the long run.


IT Strategy as an Enterprise Enabler

IT strategy is a plan to create an information technology capability that maximizes business value for the organization. IT capability is the organization's ability to meet business needs and improve business processes using IT-based systems. The objective of IT strategy is to spend the least amount of resources while generating better ROI. It helps set the direction for an IT function in an organization. A successful IT strategy helps organizations reduce operational bottlenecks, realize TCO savings and derive value from technology. ... IT strategy definition and implementation cover the key aspects of technology management, planning, governance, service management, risk management, cost management, human resource management, hardware and software management, and vendor management. Broadly, IT strategy has five phases: Discovery, Assess, Current IT, Target IT and Roadmap. The idea is to keep the usual annual and multiyear plan, but insert regular, frequent check-ins along the way. Revisit the IT strategy every quarter or every six months to ensure that optimal business value is created.


AI system audits might comply with local anti-bias laws, but not federal ones

"You shouldn’t be lulled into false sense of security that your AI in employment is going to be completely compliant with federal law simply by complying with local laws. We saw this first in Illinois in 2020 when they came out with the facial recognition act in employment, which basically said if you’re going to use facial recognition technology during an interview to assess if they’re smiling or blinking, then you need to get consent. They made it more difficult to do [so] for that purpose. "You can see how fragmented the laws are, where Illinois is saying we’re going to worry about this one aspect of an application for facial recognition in an interview setting. ... "You could have been doing this since the 1960s, because all these tools are doing is scaling employment decisions. Whether the AI technology is making all the employment decisions or one of many factors in an employment decision; whether it’s simply assisting you with information about a candidate or employer that otherwise you wouldn’t have been able to ascertain without advanced machine learning looking for patterns that a human couldn’t have fast enough.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford