Daily Tech Digest - January 02, 2024

Decoding the Black Box of AI – Scientists Uncover Unexpected Results

“If the GNNs do what they are expected to, they need to learn the interactions between the compound and target protein and the predictions should be determined by prioritizing specific interactions,” explains Prof. Bajorath. According to the research team’s analyses, however, the six GNNs essentially failed to do so. Most GNNs only learned a few protein-drug interactions and mainly focused on the ligands. Bajorath: “To predict the binding strength of a molecule to a target protein, the models mainly ‘remembered’ chemically similar molecules that they encountered during training and their binding data, regardless of the target protein. These learned chemical similarities then essentially determined the predictions.” According to the scientists, this is largely reminiscent of the “Clever Hans effect”. This effect refers to a horse that could apparently count. How often Hans tapped his hoof was supposed to indicate the result of a calculation. As it turned out later, however, the horse was not able to calculate at all, but deduced expected results from nuances in the facial expressions and gestures of his companion.
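The memorization shortcut Bajorath describes can be made concrete with a toy baseline: a predictor that ignores the target protein entirely and simply copies the affinity of the most similar training compound. All fingerprints and affinity values below are invented for illustration.

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity between two fingerprint bit sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def nearest_ligand_prediction(query_fp: set, training: list) -> float:
    """'Clever Hans' baseline: copy the affinity of the most similar
    training ligand -- the target protein never enters the prediction."""
    best_fp, best_label = max(training, key=lambda item: tanimoto(query_fp, item[0]))
    return best_label

# Toy training set: (ligand fingerprint, measured binding affinity)
train = [({1, 2, 3, 4}, 7.5), ({10, 11, 12}, 4.2)]
print(nearest_ligand_prediction({1, 2, 3, 5}, train))  # copies the similar ligand's 7.5
```

If a similarity-only baseline like this matches a GNN's test accuracy, the network has likely learned chemical similarity rather than compound-protein interactions, which is essentially what the team's analyses found.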


Why 2024 is the year for IT managers to revamp their green IT plans

Conversations with enterprise operators reveal that many do not consolidate applications when they refresh their IT equipment and are hesitant to deploy power-aware workload management tools out of concern for impacting the reliability of their IT operations. IT managers must guide their organisations to intelligently utilise their available equipment capacity, using software tools to measure, manage, and maximise utilisation within reliability constraints. All organizations should set equipment utilisation goals and build multi-year efficiency project plans to improve IT infrastructure energy performance. Data available from IT equipment manufacturers indicates that workloads on two to eight old servers can be migrated to one new server when deploying n+1 (e.g., Intel or AMD CPU generation 3 to 4) or n+2 (e.g., Intel or AMD CPU generation 2 to 4) technology. Similar improvements can be achieved in storage and network equipment. Consolidating CPU workloads along with storage and network operations can deliver a given workload on one-half to three-quarters of the original equipment.
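As a rough sketch of the consolidation arithmetic cited above (the ratios are the article's 2-to-1 through 8-to-1 server refresh figures; everything else is illustrative):

```python
def consolidated_fraction(old_servers: int, new_servers: int = 1) -> float:
    """Fraction of the original server footprint left after migrating
    workloads from `old_servers` machines onto `new_servers` machines."""
    return new_servers / old_servers

# n+1 refresh (e.g., CPU generation 3 -> 4): roughly 2-to-1 consolidation
# n+2 refresh (e.g., CPU generation 2 -> 4): up to 8-to-1 consolidation
for old in (2, 4, 8):
    print(f"{old} old servers -> 1 new: {consolidated_fraction(old):.1%} of the footprint")
```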


AI Everywhere, All the Time: Top Developments of 2023

AI-powered robots are increasingly automating tasks in manufacturing and logistics, among other industries, driving efficiency and changing the nature of work. Major milestones include Tesla's Optimus Bot prototype demonstrating dexterous, adaptable humanoid robots that could shape future automation solutions. Separately, Boston Dynamics' Atlas showcased its parkour skills, paving the way for applications in search and rescue or disaster response. The AlphaFold 2 AI system, developed by Alphabet subsidiary DeepMind, can predict protein structures and stands to revolutionize drug discovery and personalized medicine, with the potential to help mitigate numerous diseases. Robotic surgery systems grow ever more sophisticated, while AI-powered prosthetics offer amputees greater control and functionality. AI algorithms are already assisting doctors in diagnosing diseases such as cancer, offering increased accuracy and earlier detection.


How Gamification Can Help Your Business

At work, gamification is often used to build employee experience by promoting fun competition and immersive learning experiences, leading to better information retention and a heightened incentive to engage in ongoing learning and upskilling, Ringman says. Gamification is also frequently used to boost staff productivity. “In any business, there are many things that need to be done every day that many of us aren’t naturally motivated to do,” Avila observes. Gamification provides helpful context, guidance, and rewards, allowing tasks to be completed faster and more efficiently while improving focus. “This, in turn, helps the company achieve larger business goals.” Brands can also tap into gamification as they strive to engage customers and transform ordinary interactions into memorable experiences. Ringman notes that brands can use gamification to add extra fun to loyalty programs by hosting contests and competitions, as well as awarding virtual badges and trophies to customers as they complete various actions or pass significant milestones.


Envisioning a great future – India as a SuperPower

A nation’s growth is underpinned by technological advancement and how swiftly it adopts technology. During the recent state visit of India’s Prime Minister, Narendra Modiji, to the United States, he placed great emphasis on growing technologies that will revolutionize various industries. India is moving fast towards digitization. The thrust from the Government of India with the Digital India initiative, and the growing use of digital technologies such as artificial intelligence, machine learning, and data analytics across private organisations, is bringing about a phenomenal shift in India’s growth and development. Secondly, there is going to be a lot of disruption in the way we work. With AI, much work will be done by bots, so it is important to have highly skilled labour to manage the AI; this will also require upskilling the workforce, even as we gain more leisure. The way society works will change, and society will need to be adaptable. AI tools can be used as add-ons to enable our lawyers, CAs, economists, and leaders at large. Today, India is, without an iota of doubt, a force to be reckoned with in the domain of information technology.


Wi-Fi 7’s mission-critical role in enterprise, industrial networking

Wi-Fi 7 devices can use multi-link operation (MLO) in the 2.4 GHz, 5 GHz, and 6 GHz bands to increase throughput by aggregating multiple links or to quickly move critical applications to the optimal band using seamless switching between links. Fast link switching allows Wi-Fi 7 devices to avoid interference and access Wi-Fi channels without delaying critical traffic. This and other new features also make Wi-Fi 7 ideal for immersive XR/AR/VR, online gaming, and other consumer applications that require high throughput, low latency, minimal jitter, and high reliability. ... Naturally, there are challenges with achieving seamless connectivity between 5G and Wi-Fi. A lot of industry alignment is needed to enable frictionless movement between networks, across technologies, vendors, and areas such as authentication, QoS, QoE, and security. The Wireless Broadband Alliance is playing a key role in bringing all the stakeholders (operators, enterprises, and network owners) together to ensure collaboration and alignment on the frameworks that will deliver seamless connectivity.
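A minimal sketch of the two MLO behaviors described above: steering latency-critical traffic to the cleanest band, and aggregating headroom across all links. The per-band utilization and capacity numbers are invented for illustration.

```python
# Illustrative per-band state: (band name, airtime utilization, capacity in Mbps)
links = [("2.4 GHz", 0.70, 300), ("5 GHz", 0.35, 1200), ("6 GHz", 0.10, 2400)]

def best_link_for_critical(links):
    """Fast link switching: steer critical traffic to the least-congested band."""
    return min(links, key=lambda l: l[1])[0]

def aggregate_throughput(links):
    """Link aggregation: usable headroom summed across all bands."""
    return sum(cap * (1 - util) for _, util, cap in links)

print(best_link_for_critical(links))       # the cleanest band wins
print(round(aggregate_throughput(links)))  # combined headroom in Mbps
```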


Data Center Governance Trends to Watch in 2024

Historically, data center governance did not drive frequent conversations in the data center industry. Data center operators sometimes talked about it, but it has not tended to be a core area of concern – perhaps because, unlike other types of governance, data center governance isn't a requirement for businesses seeking to meet regulatory rules or avoid compliance fines. Looking ahead, however, governance in data centers is likely to become a more common item of discussion. Data centers have now matured to the point that businesses are increasingly keen to squeeze as much efficiency as possible out of them. In the past, disorganized data center assets or lack of optimal server room layouts may not have been critical. But today, data center operators face growing pressure to maximize the efficiency of their facilities. Certain regulators are now requiring disclosures about data center emissions, for example, meaning that increasing energy efficiency through effective governance practices has become important for protecting businesses' brands and reputations.


Essential skills for today’s threat analysts

Very often, for instance, there's an urgent need to communicate a new vulnerability to different audiences, which demands tailored communications for technical teams, CISOs, and board members. Williams highlights task management and patience, especially when dealing with uncertain or misleading information, and above all, coordinating between different sources of information. "So much of threat hunting today relates to that living off the land kind of thing where you're seeing things that look malicious. And so oftentimes you’re developing hypotheses and that involves consulting system admin and working toward a resolution," says Williams. ... It's also a mind game, with threat hunters needing to be highly adaptable as threats are changing daily, sometimes hourly. "You need to change with them. Never allow an inflexible mind to pervade your operational approach," says Brian Hussey, VP of threat hunting, intelligence and DFIR at SentinelOne. At the same time, you also need to see the forest for the trees. "Often threat actors introduce surface changes to their attack patterns, but core modus operandi remains unchanged, leaving important opportunities to identify and eliminate new attacks, even before they arrive," Hussey tells CSO.


Want to tackle technical debt? Sell it as business risk

There is no magic potion that can eliminate all technical debt, but it can be attacked through budgeting, provided technical debt is not perceived merely as upgrading IT infrastructure. What CIOs need to do instead is present IT infrastructure investment as an important corporate financial and risk management issue that the business can’t afford to ignore. ... Technical budget justifications for IT infrastructure upgrades, which are seldom linked to end business strategies, make it easy for budget decision-makers to defer IT infrastructure investment. Instead, budget decision-makers figure that the company can “make do” because IT will somehow find a way to keep systems running. CIOs must change this thinking. They can start by reframing IT infrastructure investment justifications from technical explanations to corporate financial and risk management explanations. ... CIOs should also team with the CFO to help reframe the tech debt narrative, because CFOs are always on the lookout for new corporate financial and risk management scenarios.


Leveraging Leadership: The Fourfold Path to Business Control

Belief systems function as a mechanism for communicating the core values, objectives, and mission of the organization, thus providing guidance and motivation to staff members. By encouraging people to improve their customer service through the inculcation of positive values, conduct, performance, and a feeling of inclusion, this lever supports the fulfillment of the organization's objectives. In the absence of a clearly defined belief system, employees are forced to rely on conjecture about the organization's intended behaviors and objectives. ... Without stifling individuals' capacity for innovation or entrepreneurship, this control mechanism permits the development of policies and standards that steer individuals away from undesirable behavior. Boundary systems implement regulations, codes of conduct, and premeditated strategic boundaries to delineate acceptable and unacceptable employee conduct, thereby establishing governing parameters. These boundaries clearly define the consequences of violating ethical principles and the outcomes that should be avoided.



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville

Daily Tech Digest - January 01, 2024

4 key devsecops skills for the generative AI era

CIOs and IT leaders must prepare their teams and employees for this paradigm shift and how generative AI impacts digital transformation priorities. Nicole Helmer, VP of development and customer success learning at SAP, says training must be a priority. “Companies should prioritize training for developers, and the critical factor in increasing adaptability is to create space for developers to learn, explore, and get hands-on experience with these new AI technologies,” she says. The shift may be profound and tactical as more IT automation becomes productized, enabling IT to move toward more innovation, architecture, and security responsibilities. “In light of generative AI, devops teams should deprioritize basic scripting skills for infrastructure provisioning and configuration, low-level monitoring configurations and metrics tracking, and test automation,” says Dr. Harrick Vin, chief technology officer of TCS. “Instead, they should focus more on product requirements analysis, acceptance criteria definition, and software and architectural design, all of which require critical thinking, design, strategic goal setting, and creative problem-solving skills.”


Don't neglect API functional testing

The first step to building a successful API functional testing strategy is to understand each API, its functions and its requirements. API requirements are often found within API documentation, but specific and necessary details are sometimes omitted. Work with the API developers to ensure documentation includes the expected behavior under all scenarios, error conditions and status codes, the API's purpose and objective, and how the API affects the application workflow. As the QA tester responsible for functional API testing, next select an API testing tool that enables testers to create and execute both automated and manual tests. Many existing QA and developer tools include an option for API testing, so check the capabilities of your existing tools before adding another. Then create a test plan and develop test cases. Once the test cases are created, organize them into all working combinations. One option is to create all the tests and then execute them; alternatively, many tools let testers test as they go. In other words, you can test each request as you develop it.
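The kind of test case this process produces can be sketched with a small response checker. The response shape, status codes, and field names here are hypothetical; a real suite would run such checks against the documented API via an HTTP client or an API testing tool.

```python
def check_response(resp: dict, expected_status: int, required_fields: list) -> list:
    """Compare an API response against documented expectations.
    Returns a list of failure messages; an empty list means the check passed."""
    failures = []
    if resp.get("status") != expected_status:
        failures.append(f"expected status {expected_status}, got {resp.get('status')}")
    body = resp.get("body", {})
    for field in required_fields:
        if field not in body:
            failures.append(f"missing field: {field}")
    return failures

# Positive case: the documented success shape
ok = {"status": 200, "body": {"id": 42, "name": "widget"}}
print(check_response(ok, 200, ["id", "name"]))  # no failures

# Negative case: an error condition with a missing field
bad = {"status": 500, "body": {}}
print(check_response(bad, 200, ["id"]))
```

Expressing expected-versus-actual as data makes the same helper reusable for both positive (documented success) and negative (error condition and status code) test cases.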


Decentralization stands as one of the most profound principles championed by blockchain. While the term often evokes images of intricate algorithms and cryptographic nodes, its implications for leadership and organizational structuring are profound. At its core, decentralization heralds a departure from the age-old top-down management models. Consider the rise of decentralized finance (DeFi) platforms, which are disrupting traditional banking systems. Instead of a centralized authority making decisions, these platforms empower their users through consensus mechanisms and democratized governance. Compound, a leading DeFi platform, is a testament to this. It operates with a decentralized governance model where token holders propose, discuss, and implement changes to the platform. This not only ensures transparency, but also inculcates a deep sense of ownership among its participants. This decentralization isn't just confined to the crypto realm. Businesses are realizing the value of distributed decision-making. For instance, the Spotify model of team organization, where squads, tribes, chapters, and guilds collaborate across functions, exemplifies a shift from rigid hierarchies to fluid, decentralized structures.


Shaping finance through technological prowess

In the present scenario, technology stands as the cornerstone of well-informed decision-making for CFOs. The integration of data analytics and artificial intelligence can equip CFOs with robust tools to dissect vast data sets, enabling them to make precise predictions and optimise resource allocation. For instance, predictive analytics has emerged as a powerful instrument that can enable CFOs to anticipate market trends and customer behaviour, thereby guiding financial strategies with unprecedented precision. Consider a scenario where a CFO of a manufacturing company leverages data analytics to optimise inventory management. By analysing historical sales data, production rates, and external market factors, the CFO can use tools to predict demand fluctuations and adjust inventory levels accordingly. This approach may not only minimise excess inventory costs but also ensure that the company is well-prepared to meet customer demands swiftly. The financial decision-making process has transitioned from a reactive stance to one driven by data-driven insights, propelling the company toward financial agility.
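The inventory scenario can be sketched with the simplest possible forecasting rule, a moving average over recent periods. The window size, safety stock, and sales figures are illustrative; a production system would use far richer predictive models.

```python
def moving_average_forecast(history, window=3):
    """Naive demand forecast: the average of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(history, on_hand, safety_stock=10, window=3):
    """Order enough units to cover forecast demand plus a safety buffer."""
    forecast = moving_average_forecast(history, window)
    return max(0, round(forecast + safety_stock - on_hand))

# Illustrative monthly unit sales for one SKU
monthly_units = [120, 135, 128, 140, 150, 146]
print(moving_average_forecast(monthly_units))       # forecast for next month
print(reorder_quantity(monthly_units, on_hand=60))  # suggested order size
```

Even this crude version shows the shift the paragraph describes: the reorder decision is driven by recent data rather than by a fixed reactive rule.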


Soon, every employee will be both AI builder and AI consumer

The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report's authors argue, or -- as they put it -- "instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one." ... Automated platforms and generative AI -- leveraged within an open and supportive corporate culture -- may amplify many human skills, they continue. "10x engineers could become much less rare. Especially as generative AI continues to bolster developer productivity and opens up a future of increased workplace automation, many of today's hindrances may not be relevant in the next five to 10 years." It's all about fostering a superior "developer experience," not just within IT shops, but across the enterprise as well. "As technology itself continues to become more and more central to the business, technology tasks and required talent will likely become central as well. ..."


How CTOs can win over the board room

Now is the time for engineering leaders to showcase engineering’s value. There’s not a single business that hasn’t been impacted by resource tightening over the last year. While CFOs are increasingly focusing on cost optimization within their businesses, they continue to prioritize growth, according to a survey by Gartner. Engineering leaders must show how they’re driving this growth. Engineering leaders who couldn’t clearly show major business impact were the first to see cuts during 2022 recession concerns. While the rest were forced to “do more with less,” they were at least able to sustain critical projects and fight for their headcount. Why? Because they clearly communicated the importance of specific investments and projects to the business’s success. No one can argue the last year has been easy for leaders across the board. But I believe good is coming from these challenges. It has forced engineering leaders to scrutinize their investments and allowed them to identify their most critical assets, enabling them to innovate even during economic uncertainty.


Data Privacy Paradox: Balancing Innovation with Protection in the Age of AI

While AI’s potential for progress shines bright, its foundation rests upon a vast ocean of personal data – our online activity, location trails, and even social media whispers. This dependence raises the specter of data surveillance: governments and corporations peering over our digital shoulders, gleaning insights into our lives, fueling fears of mass surveillance and the potential misuse of this sensitive information. The prospect is chilling not only for its invasive nature but also for its implications for individual freedoms and the potential abuses of power. But the concerns go beyond the watchful eye of Big Brother. AI’s algorithms, trained on vast datasets, can become unwitting vessels of algorithmic bias. Imagine a credit scoring system fueled by biased data, unfairly disadvantaging certain demographics. Or a criminal justice system where AI-powered predictions exacerbate existing prejudices. These are not dystopian nightmares; they are real possibilities if we fail to address the inherent biases that can creep into the heart of AI. Furthermore, the inner workings of these algorithms often remain shrouded in a veil of secrecy.


Why You Are So Resistant to Change — And How to Overcome It

As an entrepreneur, your ability to change and adapt is arguably the single most important contributor to long-term success. Stagnant businesses simply can't flourish, grow or (like those heart patients unwilling to modify their habits) survive. Ask yourself, how receptive are you to transformation in yourself, your processes, and your entire organization? Now is the time to evolve as a business owner. Start with an unwavering desire for continuous improvement. The next step is finding that emotional connection and the people or groups who can support you on your journey of change. For business leaders, these relationships are often found outside of one's own company in the form of peer advisory boards or mastermind groups. Peer advisory boards provide business owners with the requisite support and emotional connection that act as catalysts for forward progress and even innovation. As the president and CEO of such an organization, I get to witness the transformative power of connection all the time. It is truly amazing to see what can happen between owners and executives who care about each other's welfare and respect, support and elevate each other on their paths to transformation.


Open Source in 2024: More Volatility, More Risk, More AI

But there is plenty standing in the way of increased international cooperation around tech – or indeed, international cooperation around anything. To paraphrase a former British prime minister, the greatest challenge for a leader is, “Events, dear boy. Events.” If the last three years have been event-packed, 2024 will be equally so, not least because of an unprecedented number of elections due, including the U.S. presidential race. These elections become cybersecurity incidents in themselves. But they could also herald and shape further regulation and legislation that could directly affect the open source world. Both the U.S. and the E.U. have been putting in place legislation and regulation around AI, but it is in 2024 that we will see how these efforts start playing out in the real (virtual) world. The European Union’s Cyber Resilience Act will also come into effect in 2024. Recently announced revisions have reportedly made it less overtly problematic for open source, but the final text is yet to be released. At the same time, the U.S. has already been turning the technology screws on China and Russia, choking off exports of GPUs to the former, for example, and enforcing wide-ranging sanctions on the latter.


Infrastructure, Operations Leaders Must Focus on DevOps, SRE Initiatives

Rajesh Ganesan, president of ManageEngine, notes that data breaches and data privacy law violations can do irreparable damage to an organization's reputation. “By making privacy and data governance a top priority in 2024, I&O leaders can ensure their organizations are compliant with privacy laws and protected against data breaches,” he explained in an email interview. “It's crucial that every employee in the organization takes personal responsibility for data privacy.” Ganesan points out that if organizations have the financial means, it is wise to invest in private data centers. “Organizations that invest in their own domain controller and security operations can control their security posture and make sure poor levels of security from the public service provider does not affect them,” he says. Not only are these companies protected from any breaches that occur in a public cloud environment, but they also have an easier time complying with legislation, as specific control measures can be put in place.



Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson

Daily Tech Digest - December 29, 2023

5 Ways That AI Is Set To Transform Cybersecurity

Cybersecurity has long been notoriously siloed, with organizations installing many different tools and products, often poorly interconnected. No matter how hard vendors and organizations work to integrate tools, coalescing all relevant cybersecurity information into one place remains a big challenge. But AI offers a way to combine multiple data sets from many disparate sources and provide a truly unified view of an organization’s security posture, with actionable insights. And with generative AI, gaining those insights is so easy, a matter of simply asking the system questions such as “What are the top three things I could do today to reduce risk?” or “What would be the best way to respond to this incident report?” AI has the potential to consolidate security feeds in a way the industry has never been able to quite figure out. Generative AI will blow up the very nature of data infrastructure. Think about it: All the different tools that organizations use to store and manage data are built for humans. Essentially, they’re designed to segment information and put it in various electronic boxes for people to retrieve later. It’s a model based on how the human mind works.


Microservices Resilient Testing Framework

Resilience in microservices refers to the system's ability to handle and recover from failures, continue operating under adverse conditions, and maintain functionality despite challenges like network latency, high traffic, or the failure of individual service components. Microservices architectures are distributed by nature, often involving multiple, loosely coupled services that communicate over a network. This distribution often increases the system's exposure to potential points of failure, making resilience a critical factor. A resilient microservices system can gracefully handle partial failures, prevent them from cascading through the system, and ensure overall system stability and reliability. For resilience, it is important to think in terms of positive and negative testing scenarios. The right combination of positive and negative testing plays a crucial role in achieving this resilience, allowing teams to anticipate and prepare for a range of scenarios and maintaining a robust, stable, and trustworthy system. For this reason, the rest of the article will be focusing on negative and positive scenarios for all our testing activities.
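A positive/negative pair can be sketched around a simple resilience pattern, retry with graceful degradation. The fallback value and failure mode below are illustrative; real resilience suites would inject faults at the network or infrastructure layer.

```python
def call_with_retry(operation, attempts=3, fallback=None):
    """Retry a flaky dependency call, then degrade gracefully
    instead of letting the failure cascade through the system."""
    for _ in range(attempts):
        try:
            return operation()
        except ConnectionError:
            continue
    return fallback

# Positive scenario: the dependency recovers on the second attempt
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 2:
        raise ConnectionError("transient network blip")
    return "ok"

print(call_with_retry(flaky))  # recovers: "ok"

# Negative scenario: the dependency stays down, so the fallback is served
def dead():
    raise ConnectionError("service unavailable")

print(call_with_retry(dead, fallback="cached-value"))  # degrades: "cached-value"
```

The positive test verifies normal and recoverable behavior; the negative test verifies that a sustained failure is contained rather than propagated, which is exactly the cascading-failure prevention described above.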


Skynet Ahoy? What to Expect for Next-Gen AI Security Risks

From a cyberattack perspective, threat actors already have found myriad ways to weaponize ChatGPT and other AI systems. One way has been to use the models to create sophisticated business email compromise (BEC) and other phishing attacks, which require the creation of socially engineered, personalized messages designed for success. "With malware, ChatGPT enables cybercriminals to make infinite code variations to stay one step ahead of the malware detection engines," Harr says. AI hallucinations also pose a significant security threat and allow malicious actors to arm LLM-based technology like ChatGPT in a unique way. An AI hallucination is a plausible response by the AI that's insufficient, biased, or flat-out not true. "Fictional or other unwanted responses can steer organizations into faulty decision-making, processes, and misleading communications," warns Avivah Litan, a Gartner vice president. Threat actors also can use these hallucinations to poison LLMs and "generate specific misinformation in response to a question," observes Michael Rinehart, vice president of AI at data security provider Securiti. 


Cybersecurity teams need new skills even as they struggle to manage legacy systems

To stay ahead, though, security leaders should incorporate prompt engineering training for their team, so they can better understand how generative AI prompts function, the analyst said. She also underscored the need for penetration testers and red teams to include prompt-driven engagements in their assessment of solutions powered by generative AI and large language models. They need to develop offensive AI security skills to ensure models are not tainted or stolen by cybercriminals seeking intellectual property. They also have to ensure sensitive data used to train these models are not exposed or leaked, she said. In addition to the ability to write more convincing phishing email, generative AI tools can be manipulated to write malware despite limitations put in place to prevent this, noted Jeremy Pizzala, EY's Asia-Pacific cybersecurity consulting leader. He noted that researchers, including himself, have been able to circumvent ethical restrictions that guide platforms such as ChatGPT and prompt them to write malware.


The relationship between cloud FinOps and security

Established FinOps and cybersecurity teams should annually evaluate their working relationship as part of continuous improvement. This collaboration helps ensure that, as practices and tools evolve, the correct FinOps data is available to cybersecurity teams as part of their monitoring, incident response and post-incident forensics. The FinOps Foundation doesn't mention cybersecurity in its FinOps Maturity Model. But, by all rights, FinOps and cybersecurity collaboration indicates a maturing organization in the model's Run phase. Ideally, moves to establish such collaboration should show themselves in the Walk stage. ... Building a relationship between the FinOps and cybersecurity teams should start early, when an organization chooses a FinOps tool. A FinOps team can better forecast expenses, plan budget allocation and avoid unnecessary costs by understanding security requirements and constraints. These forecasts result in a more cost-effective and financially efficient cloud operation, so plan for some level of cross-training between the teams.


What is GRC? The rising importance of governance, risk, and compliance

Like other parts of enterprise operations, GRC comprises a mix of people, process, and technology. To implement an effective GRC program, enterprise leaders must first understand their business, its mission, and its objectives, according to Ameet Jugnauth, the ISACA London Chapter board vice president and a member of the ISACA Emerging Trends Working Group. Executives then must identify the legal and regulatory requirements the organization must meet and establish the organization’s risk profile based on the environment in which it operates, he says. “Understand the business, your business environment (internal and external), your risk appetite, and what the government wants you to achieve. That all sets your GRC,” he adds. The roles that lead these activities vary from one organization to the next. Midsize to large organizations typically have C-level executives — namely a chief governance officer, chief risk officer, and chief compliance officer — to oversee these tasks, McKee says. These executives lead risk or compliance departments with dedicated teams.


Revolutionising Fraud Detection: The Role of AI in Safeguarding Financial Systems

Conventional fraud detection methods, primarily rule-based systems, and human analysis, have proven increasingly inadequate in the face of evolving fraud tactics. Rule-based systems, while effective in identifying simple patterns, often struggle to adapt to the ever-changing landscape of fraud. Fraudsters have stronger motivation and they evolve faster than the rules in the rules engine. ... The same volumes of data that are overwhelming for traditional fraud detection systems are fuel for AI. With its ability to learn from vast amounts of data and identify complex patterns, AI is poised to revolutionize the fight against fraud. ... While AI offers immense potential, it’s crucial to acknowledge the challenges associated with its adoption. Data privacy concerns, ethical considerations around algorithmic bias, and the need for robust security measures are all critical aspects that demand careful attention. As AI opens new frontiers in fraud prevention, unregulated AI technology such as deepfake in the wrong hands could also enable sophisticated impersonation scams. However, the benefits of AI far outweigh the challenges. 
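The rule-based versus learned-pattern contrast can be sketched in a few lines: a static threshold misses fraud designed to stay under it, while even a simple statistical model of an account's own history catches the anomaly. The amounts and thresholds below are invented for illustration; real systems use far richer models than a z-score.

```python
from statistics import mean, stdev

def rule_based_flag(amount, limit=10_000):
    """Static rule: flag any transaction above a fixed threshold."""
    return amount > limit

def zscore_flag(amount, history, threshold=3.0):
    """Statistical flag: unusual relative to this account's own history."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold

# Typical transaction amounts for one account
history = [42, 55, 38, 61, 47, 52, 44, 58]
print(rule_based_flag(9_500))       # slips under the static rule
print(zscore_flag(9_500, history))  # wildly unusual for this account
```

This is the adaptation gap the paragraph describes: the rule only knows the limit its authors wrote, while the data-driven check adjusts to each account's behavior without anyone rewriting it.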


API security in 2024: Predictions and trends

The rapid rate of change of APIs means organizations will always have vulnerabilities that need to be remediated. As a result, 2024 will usher in a new era where visibility will be a priority for API security strategies. Preventing attackers from entering the perimeter is not a 100% foolproof strategy. Real-time visibility into a security environment, by contrast, enables rapid responses from security teams that neutralize threats before they impact operations or extract valuable data. ... With the widespread use of APIs, especially in sectors such as financial services, regulators are looking to encourage transparency in APIs. This means data privacy concerns and regulations will continue to impact API use in 2024. In response, organizations are becoming wary of having third parties hold and access their data to conduct security analyses. We expect to see a shift in 2024 where organizations will demand running security solutions locally within their own environments. Self-managed solutions (either on-premises or private cloud) eliminate the need to filter, redact, and anonymize data before it’s stored.
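To make the last point concrete, here is a rough sketch of the pre-export redaction pass an organization would otherwise run before handing data to a third party. The patterns are illustrative only, not a complete PII taxonomy:

```python
import re

# Redaction step that third-party-hosted analysis would require before export.
# Two example patterns: email addresses and card-like digit runs.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(log_line: str) -> str:
    """Mask common PII patterns before a log line leaves the environment."""
    line = EMAIL.sub("[EMAIL]", log_line)
    return CARD.sub("[CARD]", line)

print(redact("user alice@example.com paid with 4111 1111 1111 1111"))
# user [EMAIL] paid with [CARD]
```

Running the security tooling locally removes this whole step (and the risk of a pattern you forgot), which is the appeal of self-managed solutions the article predicts.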


The Terrapin Attack: A New Threat to SSH Integrity

Microsoft’s logic is that the impact on Win32-OpenSSH is limited. This is a major mistake. Microsoft’s decision allows unknown server-side implementation bugs to remain exploitable in a Terrapin-like attack, even if the server got patched to support “strict kex.” As one Windows user noted, “This puts Microsoft customers at risk of avoidable Terrapin-style attacks targeting implementation flaws of the server.” Exactly so. You see, for this protection to be effective, both client and server must be patched. If either one is vulnerable, the entire connection can still be attacked. So, to be safe, you must patch and update both your client and server SSH software. And if you’re on Windows and haven’t manually updated your workstations, their connections are open to attack. While patches and updates are being released, the widespread nature of this vulnerability means it will take time for all clients and servers to be updated. Because an MITM attacker must already be in place for you to be vulnerable, I wouldn’t spend the holiday season worrying myself sick. I mean, you’re sure you don’t already have a hacker inside your system, right? Right!?
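The "both sides must be patched" logic can be checked mechanically. OpenSSH's strict-KEX countermeasure is signaled by pseudo-algorithms in the key-exchange lists (`kex-strict-c-v00@openssh.com` from the client, `kex-strict-s-v00@openssh.com` from the server); the sketch below assumes you have already captured those lists (e.g. from the KEXINIT exchange, which is out of scope here):

```python
# Terrapin mitigation holds only when BOTH peers advertise strict KEX.
SERVER_MARKER = "kex-strict-s-v00@openssh.com"
CLIENT_MARKER = "kex-strict-c-v00@openssh.com"

def connection_protected(client_kex: list[str], server_kex: list[str]) -> bool:
    """True only if both sides negotiated the strict key-exchange extension."""
    return CLIENT_MARKER in client_kex and SERVER_MARKER in server_kex

patched_client = ["curve25519-sha256", CLIENT_MARKER]
patched_server = ["curve25519-sha256", SERVER_MARKER]
unpatched_server = ["curve25519-sha256"]

print(connection_protected(patched_client, patched_server))    # True
print(connection_protected(patched_client, unpatched_server))  # False — still attackable
```

The second case is exactly the Win32-OpenSSH scenario the author is worried about: a patched peer on one end buys you nothing if the other end never advertises the marker.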


Supporting Privacy, Security and Digital Trust Through Effective Enterprise Data Management Programs

Those professionals responsible for supporting privacy efforts should therefore prioritize effective enterprise data management because it is integral to safeguarding individuals’ privacy. A well-structured data management framework works to ensure that personal information is handled ethically and in compliance with regulations, while fostering a culture of responsible data stewardship within organizations. When done right, this reinforces trust with stakeholders, serves as a differentiator in the marketplace, improves visibility into data ecosystems, expands reliability of data, and optimizes scalability and innovative go-to-market efforts. ... Most, if not all, of the global data privacy laws and regulations require data to be managed effectively. To comply with these laws and regulations, organizations must first understand the data they collect, the purposes for its collection, how it is used, how it is shared, how it is stored, how it is destroyed, and so on. Only after organizations have a full understanding of their data ecosystem can they begin to implement effective controls to both protect data and preserve the ability of the data to achieve intended operational goals.
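The questions the article lists — what is collected, why, how it is used, shared, stored, and destroyed — map naturally onto fields in a data inventory. A minimal sketch, with invented field names and values, of what one inventory entry and a gap check might look like:

```python
from dataclasses import dataclass, field

# One entry in a data inventory: the questions regulators expect answered,
# captured as fields. Names and values are illustrative, not a standard schema.
@dataclass
class DataAsset:
    name: str
    purpose: str                 # why it is collected
    storage: str                 # where and how it is stored
    shared_with: list = field(default_factory=list)
    retention_days: int = 365    # when it is destroyed

def unanswered(asset: DataAsset) -> list:
    """Flag inventory questions left blank — the gaps that block effective controls."""
    return [f for f in ("purpose", "storage") if not getattr(asset, f)]

crm = DataAsset(name="customer_emails", purpose="billing notices", storage="")
print(unanswered(crm))  # ['storage']
```

Only once every entry answers every question — the "full understanding" the article describes — can controls be mapped onto the ecosystem with confidence.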



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown

Daily Tech Digest - December 28, 2023

CISO: Top 10 Trends for 2024

Mike highlighted recent legal cases involving CISOs, expressing concern about the unprecedented accountability of security professionals and the potential for them to be scapegoated. He discussed cases like Joe Sullivan at Uber and Tim Brown at SolarWinds, emphasizing the SEC's issuance of a Wells Notice for a CISO, a first in history. Mike questioned the trend of holding CISOs responsible for issues beyond their control and predicted a continued exodus of CISOs from their roles due to perceived lack of support. Yogesh offered a contrasting view, suggesting that recent cases may serve as catalysts for elevating the role of CISOs and improving security programs. ... Nitin addressed the widespread reliance on third parties in today's technological landscape and the need for continuous due diligence beyond initial assessments. Nitin emphasized the importance of close coordination and regular conversations with key third-party providers, highlighting the significance of vendor management skills and understanding the scope of responsibilities. Yogesh brought up the concept of shared responsibility models inspired by the practices of AWS and Amazon, emphasizing the need for a prioritized and evolving approach to third-party risk management.


Why People Should Be at the Heart of Operational Resilience

Embracing the ethos of “you build it, you run it” isn’t necessarily a bad thing, but turning it into a fetish can easily lead us into a place where failures and faults become the responsibility of individuals. That’s not good for anyone, humans or technology. “If the resilience of a system depends on humans never making mistakes, then the system is really brittle,” Shortridge said. “Humanity’s success is because of our creativity and ability to adapt; it isn’t because we’re great at doing the same thing the same way every time, or can memorize 50 things on a checklist that we never forget.” Although DevOps is well-intentioned in attempting to break down barriers, it has arguably contributed to a broader organizational discomfort with failure — a desire to control and minimize risk. “Many organizations struggle with the existential angst of wanting to prevent anything bad from ever happening,” Shortridge claimed. This, she added, is ultimately “an impossible goal … It’s a downward spiral where the fear of things going wrong results in a slower, heavier approach, which actually increases the likelihood of things going wrong – as well as hindering the ability to swiftly recover from failure.”
Managing vendor partners is not a “one-and-done” activity. That’s why technology is so crucial to keep this process from being a herculean effort and make continuous monitoring more realistic throughout every stage of the vendor lifecycle. As an example, consider the initial assessment stage, when companies invite vendors to bid or pitch their services. Security questionnaires should be required at this point, especially for prospects that would be gaining full access to systems. These questionnaires can be automated to start, while still allowing respondents to supplement responses or resources. It's also a good idea to require a security audit report to illuminate any gaps that would need to be addressed before a contract gets signed. Regardless of the size or influence of vendor prospects, companies should always do their due diligence when it comes to assessing risks to avoid easily preventable attacks. Companies should provide a contract to approved vendors that clearly outlines compliance expectations — including a timeline of how long they have to fix any issues identified in the earlier security audit. 


Getting the most from cloud parking

Cloud parking, a component of FinOps, is the practice of shutting down cloud resources when your business is not using them. For example, if you have a cloud server instance running on a service like EC2, turning the server off when it's not hosting an active workload is an example of cloud parking. Later, if you want to use the server again, you'd "unpark" it by starting the instance back up. Cloud parking is important because almost all cloud services charge, at least in part, based on total running time. By parking cloud resources that you're not actively using, you stop the pricing meter and avoid paying for resources you don't actually need. ... Most types of cloud data resources, such as databases or storage volumes, can't be shut off in the same way that compute resources can, so businesses end up paying for their data even after applications that interact with the data are no longer running. With a sophisticated toolset that allows you to convert between data storage types quickly, it's possible to minimize this cost. For instance, imagine you shut down an EC2 instance and want to stop paying for the EBS volume that the instance uses.
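Because nearly all cloud pricing is metered on running time, the value of parking reduces to simple arithmetic. The hourly rate and schedule below are assumptions chosen for the sketch, not real pricing:

```python
# Rough weekly savings from parking an instance outside business hours.
# The $0.40/hour rate and the 10x5 business schedule are illustrative assumptions.
HOURLY_RATE = 0.40
HOURS_PER_WEEK = 24 * 7   # 168 hours if the instance never sleeps
BUSINESS_HOURS = 10 * 5   # parked nights and weekends

always_on = HOURLY_RATE * HOURS_PER_WEEK
parked = HOURLY_RATE * BUSINESS_HOURS

print(f"weekly cost always-on: ${always_on:.2f}, parked: ${parked:.2f}")
print(f"saved: ${always_on - parked:.2f}")
```

Under these assumptions the parked instance costs $20.00 a week instead of $67.20 — the same machine at less than a third of the price, which is why parking is a staple FinOps practice.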


The secret to making data analytics as transformative as generative AI

Unstructured and ungoverned data lakes, often built around the Hadoop ecosystem, have become the alternative to traditional data warehouses. They’re flexible and can store large amounts of semi-structured and unstructured data, but they require an extraordinary amount of preparation before the model ever runs. ... “The power of GPUs allows them to analyze as much data as they want,” Leff says. “I feel like we’re so conditioned — we know our system cannot handle unlimited data. I can’t just take a billion rows if I want and look at a thousand columns. I know I have to limit it. I have to sample it and summarize it. I have to do all sorts of things to get it to a size that’s workable. You completely unlock that because of GPUs.” RAPIDS, Nvidia’s open-source suite of GPU-accelerated data science and AI libraries, also accelerates performance by orders of magnitude at scale across data pipelines. It takes the massive parallelism that’s now possible and lets organizations apply it to the Python and SQL data science ecosystems, adding enormous power underneath familiar interfaces.


3 Strategies For Turning Uncertainty Into a Clear Path Forward

To stand up to uncertainty, you must start reframing it as an opportunity. For leaders, rapidly multiplying unknowns increases the pressure to rebuild and reimagine their businesses. Though the journey from uncertainty to clarity is formidable, addressing challenges with a lens of opportunity leads to more and better innovation. ... A focus on simplicity can help. Simplicity is about focusing on the right things rather than doing things right. It's about focusing on the fundamentals, such as customer needs, and simple but powerful questions, such as "what do they need?" that help you get to the core of a problem and ensure you're solving the right one. "Keep it simple" means focusing on the strongest growth opportunities and having the courage to get rid of efforts that don't move the needle. ... For many who have "grown up" in large, resource-rich corporate environments, there is an instinct to default to resources (e.g. budget, headcount) to solve problems. However, my research over 15 years has shown that constraints can help navigate uncertainty. How? By activating creativity and ingenuity and relying on existing resources rather than waiting for additional resources to get started.


AI Investments We Shouldn't Overlook

AI is not a product -- it’s an ever-growing cycle of data usage, and people can be a huge factor in its failure. This leads us back to trust, as most people don’t trust the technology or the leaders working to regulate it. According to Pew Research, 52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence. Those concerns are particularly strong in communities historically underrepresented in the design and deployment of technology. Meaningful participation, including communities of users, those impacted, and creators, will improve ethical inquiry, help reduce harmful biases, and build confidence in AI’s fairness. To help allay these concerns, we need to have “seats at the table” for people with broader domain expertise. This is especially vital in areas such as health, finance, and law enforcement where bias has existed historically and is still a serious concern. Additionally, we should consider funding the National Science Foundation’s National AI Research Resource Task Force, and similar efforts, to reduce the economic barriers of entry into AI professions.


How to turn shadow IT into a culture of grassroots innovation

Balancing innovation with IT control remains necessary. Cybersecurity, including privacy and data protection, is considered the top business risk by corporate leaders. Your organization’s risk tolerance will depend on its culture, customers, and industry. Many aspects of security will be non-negotiable, but many can be solved by listening to users and evolving how you use platforms and services. One of the main risks associated with shadow IT is being blind to where company data lives. Without control, you can’t apply consistent policies. Let teams know why security processes are necessary and which standards any platform or tool must meet. Work to understand the business purpose of the adoption so you can help them find an alternative if their initial choice doesn’t meet those standards. The goal is to help users make intelligent security decisions – or help them behave securely by default – while enabling them to take advantage of technology that enhances their work. For example, by adopting a single sign-on solution with multi-factor authentication, you can solve access issues and give people a wider choice of apps and services while maintaining centralized visibility.


Unstructured Data Management Predictions for 2024

Data is increasingly in motion as IT needs to leverage new storage technologies and satisfy new business requirements. Enterprise data migrations of unstructured file and object data have long been complex, too manual, and often reliant on professional services. Automation and AI tools will change this, enabling intelligent, efficient data migrations that no longer need IT managers to babysit them; the tools will also be adaptive. Modern tools will know how to solve problems on the fly, self-remediate, and recommend optimal storage tiers for different unstructured data workloads and use cases. This is a timely development, as data migrations are becoming more varied all the time and dependent upon the customer's changing environment — from firewalls to network connections to security configurations. ... Unstructured data management will deliver affordable resiliency at a fraction of the cost, by creating cheap copies in durable object storage in the cloud for non-critical data — which is the bulk of all data in storage. This "poor man's data resiliency" approach will complement the 3x backup method for mission-critical data to create a cost-effective and holistic disaster recovery strategy.
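The tier-recommendation behavior described above usually reduces to a heuristic over access recency. A minimal sketch — the 30- and 180-day thresholds and tier names are invented for illustration, not any product's defaults:

```python
from datetime import date, timedelta

# Recommend a storage tier from last-access age: the kind of heuristic the
# "intelligent migration" tooling described above automates at scale.
def recommend_tier(last_access: date, today: date) -> str:
    age_days = (today - last_access).days
    if age_days <= 30:
        return "hot"    # fast primary storage
    if age_days <= 180:
        return "warm"   # cheaper file tier
    return "cold"       # durable object storage in the cloud

today = date(2024, 1, 2)
print(recommend_tier(today - timedelta(days=7), today))    # hot
print(recommend_tier(today - timedelta(days=400), today))  # cold
```

A real tool would weigh more signals (file size, ownership, compliance tags), but the shape is the same: classify each dataset, then migrate it to the tier that matches its workload.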


ChatGPT can cough up sensitive information, raises privacy concerns

While catastrophic forgetting is supposed to bury old information as new data is added, researchers from Indiana University (IU) Bloomington have found that memories of these large language models (LLMs) can be jogged, posing privacy risks. According to a New York Times report, graphics editor Jeremy White was informed that his email address was procured via ChatGPT by an IU Ph.D. candidate, Rui Zhu. Zhu and his team were able to obtain White's email address, and those of over 30 other NYT employees, from GPT-3.5 Turbo, an LLM from OpenAI. ... Speaking to the Daily Mail, AI expert Mike Wooldridge warned that confiding in ChatGPT about personal matters or opinions, such as work grievances or political preferences, could have consequences, The Guardian reported. Sharing private information with the chatbot may be "extremely unwise" as the revealed data contributes to training future versions. Wooldridge emphasizes that users should not expect a balanced response, as the technology tends to "tell you what you want to hear." He also dismissed the idea that AI possesses empathy or sympathy and cautions users that anything shared with ChatGPT may be used in future versions, making retractions nearly impossible.



Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous

Daily Tech Digest - December 27, 2023

Artificial ethics: Programmed principles or cultivated conviction?

Are AI systems developing generalisable ethical principles? The evidence suggests only a limited ability to contextually apply concepts like privacy rights and informed consent. Or is ethical behavior just pattern recognition of scenarios labeled “unacceptable” by training data? There is a real risk of overreliance on surface-level input/output mapping without philosophical grounding. Compare this rules-based approach to the human internalisation of ethical frameworks tied to justice, rights, duties, and harms. ... Their reasoning happens within limited data slices. This opacity around applied judgment represents a major trust gap. We cannot investigate when AI should make independent decisions in ethically ambiguous areas versus defer to human oversight, due to understandable limitations in their moral literacy. Bridging this chasm requires architecting comprehensive ethical infrastructure across data sourcing, model design, and product applications. Ethics must permeate the entirety of systems, not follow as an afterthought. Careful scrutiny into the reasoning behind AI choices can uncover areas for instilling principled priorities over transitory rules.


‘Merchants of Complexity’: Why 37Signals Abandoned the Cloud

With this ease of cloud computing comes a certain loss of independence. When a cloud provider suffers a massive outage, the customers are helpless to do anything for their own users. Hightower and DHH recalled a series of outages on the Google Cloud Platform that was so bad, it spurred 37Signals to move everything over to AWS. “The sense of desperation you feel when everything is out of your control, and there’s literally nothing we can do in the moment to fix it, is just so disheartening,” DHH said. And moving a workload, and its associated data, from one cloud to another is far from a trivial or inexpensive task. DHH noted that it cost 37signals “hundreds of thousands of dollars” to move 6 to 7 petabytes of data from GCP, due to egress costs. “This whole idea that the cloud is going to give you mobility was not really true,” DHH said. ... DHH related how you can see $600,000 of Dell servers out there on a loading dock somewhere. Whereas with the cloud, you are never sure where the money goes. You can click a button to spin up an authorization service, forget about it and let it run up thousands of dollars in monthly charges on the corporate account.
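The "hundreds of thousands of dollars" figure is roughly what back-of-envelope egress arithmetic predicts. The $0.05/GB rate below is an assumed blended rate for the sketch — list egress prices vary by provider, tier, and region:

```python
# Back-of-envelope egress cost for moving 7 PB out of a cloud provider.
# $0.05/GB is an illustrative blended rate, not a quoted price.
PETABYTE_GB = 1_000_000  # decimal petabyte expressed in gigabytes
rate_per_gb = 0.05
data_pb = 7

cost = data_pb * PETABYTE_GB * rate_per_gb
print(f"${cost:,.0f}")  # $350,000
```

Even at a heavily discounted rate, single-digit petabytes land squarely in the six-figure range DHH describes, which is why egress fees undercut the promised workload mobility.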


If you don’t already have a generative AI security policy, there’s no time to lose

Over time, security teams have tried to rein in shadow IT with policies that mitigate the plethora of risks and challenges it has introduced, but many remain due to its scale. Figures from research firm Gartner revealed that 41% of employees acquired, modified, or created technology outside of IT's visibility in 2022, while a 2023 shadow IT and project management survey from Capterra found that 57% of small and midsized businesses have had high-impact shadow IT efforts occurring outside the purview of their IT departments. Although generative AI is quite a different thing, it's taking off far quicker than shadow IT did. The lesson is that security-focused policies should be put in place in the early stages as new technology use grows and not after it reaches an unmanageable scale. Adding to the pressures are the potential security risks generative AI can insert into businesses if unmanaged, which are very much still being understood. ... The problem is that most organizations, regardless of size or industry, are experiencing the same challenge around how to control and manage the secure use of generative AI, Thacker says. 


The Silver Bullet Myth: Debunking One-Size-Fits-All Solutions in Data Governance

Customized Data Governance frameworks streamline Data Management processes, allowing them to better align with specific organizational workflows. This alignment drives an increase in the overall efficiency of operations and reduces redundancies, saving both time and resources. The result – minimizing errors, making the ship run more smoothly, and cost savings – is a complete win-win scenario. Effective Data Governance is also an instrumental factor for managing risks such as breaches and misuse. Customized frameworks provide organizations with enough space to put together robust mechanisms for identifying, assessing, mitigating, and ultimately dealing with risks in a way that is tailored to the specific risk landscape in question. Another thing the proponents of the silver bullet approach disregard is the need for solutions for protecting rapidly moving data, as with same-day ACH transfers, messaging apps, and real-time video call apps such as Zoom and Google Meet. As organizations evolve, so, too, do their Data Governance needs. Customized frameworks are scalable and adaptable, accommodating changes as the organization grows, enters new markets, or adopts new technologies.  


CIOs Battle Growing IT Costs with Tools, Leadership

CIOs can optimize IT spend by implementing more rigorous, strategy-aligned software approval processes aimed at avoiding duplicative spend and ensuring contracts are rightsized for the business needs. “The challenge and responsibility of CIOs is to be intentional with every dollar and investment by keeping the organization focused on the most important priorities instead of pursuing every exciting new idea,” she says. Mandell says IT leaders should encourage a culture of innovation and ideation, but they must also balance maintaining a strategic focus -- and communicate these goals across their own team and other areas of the organization. ... “Bridging the finance and engineering functions is hard work and you need both a team and a platform to effectively accomplish this,” she says. “When you do, you will be able to ensure and show that your cloud costs are being effectively managed.” ... When integrated effectively, AI-powered solutions can enhance decision-making and identify opportunities for optimization. “With the help of emerging technologies, CIOs and other budget decision makers will have greater visibility into spend, helping ensure resources are allocated strategically and IT environments are streamlined,” Mandell explains.


CIOs in financial services embrace gen AI — but with caution

AI is not the future of financial services — it’s the present. Genpact, a major business and technology services company that assists banks such as JP Morgan and Goldman Sachs, is already utilizing AI. “It’s really good at summarising, filling in blanks, and connecting dots, so generative AI is fit for purpose,” says Brian Baral, global head of risk at Genpact. “We’ve been able to leapfrog and do in months what had taken three years, but the data is key. Banks have to get ready to take the step forward.” Conscious of the recent history of disruption to financial services, the sector’s technology leaders are already looking for opportunities in AI. “Generative AI is starting off a new age of exploration in IT,” says Frank Schmidt, CTO at insurance firm Gen Re. Cugini at KeyBank agrees, and adds that the exploration has to include a cross-functional team from all areas of the business, not just IT. “We also pulled in some experts from Microsoft and Google to really understand what AI means to our sector.” Schmidt sees AI as having potential in process automation, particularly underwriting submissions. “AI will play a role in this workflow and classifying information,” he says.


NASA Releases First Space Cybersecurity Best Practices Guide

The guidance urges public and private sector organizations conducting space activities to establish a continuous process of mission security risk analysis and risk response in order to routinely identify and address security risks related to specific operations. NASA also advises organizations to apply the principles of domain separation and least privilege designs across their enterprises to better mitigate supply chain attacks and other operational vulnerabilities. Misty Finical, deputy principal adviser for enterprise protection at NASA, said the guidance "represents a collective effort to establish a set of principles that will enable us to identify and mitigate risks and ensure continued success of our missions, both in Earth's orbit and beyond." Reports detail a variety of challenges that organizations have faced in recent years while responding to emerging cybersecurity threats in space. A 2019 Government Accountability Office assessment found that the Department of Defense had struggled to adopt new approaches to protect U.S. satellites from cyberattacks by foreign adversaries and from the increasing threat of space debris.


How to incorporate human-centric security

The concept of human-centric security focuses on better management of the insiders that either inadvertently or maliciously cause so many of the threats that companies must deal with. Gartner recommends reducing friction caused by security strategies and starting to manage security risk. A human-centric approach to security not only takes the burden of security off the employee, it starts to look at the overall risk associated with certain behaviors and on improving the experience of employees. One way to look at this is as a trade-off. Allowing people to work remotely, for example, carries a certain security risk that needs to be weighed against the benefits of giving employees flexibility. However, another important way to look at risk is to analyze the behaviors that are most likely to lead to future threats and determine new ways to mitigate those risks to reduce future threats. By using insider risk management software, companies can better understand new work patterns of remote employees, track negative sentiment and flag access to sensitive data to proactively improve the company’s overall cybersecurity and employee experience.


AI: A Data Privacy Ally?

We can expect to see new technologies created to address the security and data privacy concerns in an AI world. Imagine consumers getting their own “AI Consent Assistant.” Such a tool would move us from static, one-time consent checkboxes to dynamic, ongoing conversations between consumers and platforms, with the AI Consent Assistant acting as a personal guardian to negotiate on our behalf. Or maybe AI tools could be developed to help security teams predict privacy breaches before they happen or proactively auto-redact sensitive information in real-time. We must think differently about AI in relation to data privacy – the future of data is not about how much we collect, but how ethically it is used and how we can realistically safeguard it so that we get the best out of AI without violating data privacy tenets. ... Transparency should never be a question: no one should have to guess at what data is collected, why, how it is stored, or how to remove it. Before launching any new technology or platform, companies should assess the privacy impact, working to identify potential privacy issues and taking preventive measures from the start, as it remains quite difficult to retrofit privacy.


Security And Market Adoption Of Open Banking

With regard to the first element that ensures security, the European Banking Authority drafted regulatory technical standards for strong customer authentication in 2016. As specified by PSD2, strong authentication must rely on at least two key elements that are independent of one another. This is to ensure the disclosure or theft of one authentication element does not affect the overall security. ... As for the second element of security mitigation, the communication channel between third-party providers and banks, PSD2 paved the way for regulated application programming interfaces. The interface must allow third-party providers to identify themselves with banks when requesting access to accounts. This outcome establishes requirements and responsibilities that prevent third-party providers from using expired certificates, or not having them at all, when fetching data or transmitting a payment order. ... Building trust in open banking is an essential step toward achieving widespread adoption as well. Companies can share real-life examples, such as case studies and testimonials. These are powerful ways to showcase the benefits of open banking and building trust with customers. 
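The independence requirement in PSD2's strong customer authentication — at least two factors from distinct categories (knowledge, possession, inherence) — is easy to express as a check. The factor-to-category mapping below is illustrative, not a regulatory list:

```python
# PSD2-style strong customer authentication: the presented factors must span
# at least two independent categories. Mapping is an illustrative example.
CATEGORY = {
    "password": "knowledge",
    "pin": "knowledge",
    "otp_device": "possession",
    "registered_phone": "possession",
    "fingerprint": "inherence",
}

def is_strong(factors: list[str]) -> bool:
    """True only if the factors cover two or more distinct categories."""
    return len({CATEGORY[f] for f in factors}) >= 2

print(is_strong(["password", "fingerprint"]))  # True
print(is_strong(["password", "pin"]))          # False — same category, not independent
```

The second case shows why independence matters: a password and a PIN are both "something you know," so stealing one kind of secret compromises both, exactly the failure mode the standard is written to exclude.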



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - December 26, 2023

Generative AI is forcing enterprises and policymakers to rewrite the rules of cybersecurity

In a sense, creativity is the new hacker’s currency; it’s used to craft and execute attacks that traditional cybersecurity measures fail to detect and prevent. With 72 percent of white hat hackers believing they’re more creative than AI, it’s safe to assume that bad actors with similar skill sets only need a few creative muscles to cause material problems at scale. From persistent nagging to creative wordplay, hackers can trick an AI model to perform unintended functions and reveal information otherwise meant to be guarded. These prompts don’t need to be complex, and bad actors are constantly exploring new methods to get generative AI models to spill their secrets. The threat landscape for companies innovating with AI just got a lot more complex. So what should we do about it? Just like there are various ways to express a message in English, the same goes for LLM hacks. There are countless different ways to get an AI model to produce toxic or racist content, expose credit card information, or espouse misinformation. The only way to effectively protect AI apps from this volume of attack vectors is with data. A lot of it. Safeguarding against AI threats requires extensive knowledge of what those threats are. 


Rejection Doesn't Have to Be a Bad Thing. Here's How You Can Use It as a Tool for Success.

Pain is inevitable, but suffering is optional. Recognize that you have a choice in how you feel about rejection. Whatever story you tell yourself about rejection comes from you. It's up to you to interpret the information that exists in your world. You have the power to flip the script, change the narrative and tell yourself a different story. You can choose to view rejection as a good thing — it means you put yourself out there, asked a tough question and exuded courage. It means you got out of your comfort zone, which always helps us grow and evolve. It means you got to practice a skill (the skill of asking, influencing or selling). That practice will help you grow thicker skin and hone your craft, making you stronger and tougher. With that in mind, you can choose to view rejection as a good thing. ... Once you've been rejected and know why, you can adjust your strategy. You might learn that making calls at lunch time isn't effective because no one answers the phone. You might learn you've been targeting the wrong demographic and need to pick different prospects. You might learn prospecting on the weekdays isn't as effective as prospecting on weekends. 


You should be worried about cloud squatting

The core issue is that cloud asset deletions often occur without removing associated records, which can create security risks for subdomains. Failure to also delete records allows attackers to exploit subdomains by creating unauthorized phishing or malware sites. This is called cloud squatting. Resources are typically provisioned and deallocated programmatically. Allocating assets such as virtual servers and storage space is quick, generally done in seconds, but deallocation is more complex, and that’s where the screwups occur. ... To mitigate this risk, security teams design internal tools to comb through company domains and identify subdomains pointing to cloud provider IP ranges. These tools check the validity of IP records assigned to the company’s assets, which are assigned automatically by cloud providers. I always get nervous when companies create and deploy their own security tools, considering that they may create a vulnerability. Mitigating cloud squatting is not just about creating new tools. Organizations can also use reserved IP addresses. This means transferring their owned IP addresses to the cloud, then maintaining and deleting stale records, and using DNS names systematically.
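The internal sweep described above boils down to: for each DNS record, does it point into a cloud provider range, and if so, do we still hold that address? A minimal sketch — the ranges, hostnames, and addresses are made up for illustration:

```python
import ipaddress

# Flag DNS records whose targets sit in a cloud provider range but are no
# longer in our live-asset inventory: candidates for cloud squatting.
CLOUD_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # example provider range
LIVE_ASSETS = {"203.0.113.10"}                           # addresses we still hold

def dangling(records: dict) -> list:
    """Return hostnames pointing at cloud IPs we have released."""
    out = []
    for host, ip in records.items():
        addr = ipaddress.ip_address(ip)
        in_cloud = any(addr in net for net in CLOUD_RANGES)
        if in_cloud and ip not in LIVE_ASSETS:
            out.append(host)  # a reclaimable cloud IP someone else could be assigned
    return out

records = {"app.example.com": "203.0.113.10", "old.example.com": "203.0.113.77"}
print(dangling(records))  # ['old.example.com']
```

A flagged record like `old.example.com` is exactly the subdomain an attacker could take over by provisioning instances until the provider hands them the released address.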


Great business partners drive great business performance

A core part of the finance team's role is risk management and the ability to say “No”. Too often finance teams say “No”. My boss says that a CFO is often a CF-No. There are two aspects here. To use football parlance, a CFO has not just to keep score but to score goals. This means that finance teams have to enable risk-taking. No risk, no gain. Capital allocation and building the resilience to take measured risks is a critical function of the CFO. In a VUCA world, understanding the risks is a critical imperative. The Covid pandemic and the Ukraine war have resulted in significant supply chain risks. The rapid pace of digitisation, AI, and other developments threatens business models. Making sense of these developments, allowing strategic choices to develop, capital to be allocated, and monies to be invested now engages CFOs significantly. The CFO, at times, has to be a CF-No. Exercising this has to be done very carefully. Done too often, the finance teams become a blocker. But if the teams have done enough to provide insights, engage well with the business, and develop trust, saying “No” is accepted as sage advice by the business. Getting this right is an art. But it is something that finance teams now have to work on constantly.


How the new Instegogram threat creates liability for organizations

Under Section 230 of the Communications Decency Act (CDA), companies that offer web-hosting services are typically shielded from liability for most content that customers or malicious users place on the websites they host. However, such protection may cease if the website controls the information content. A company that uses a social media network to create the picture or develop information would arguably control that information and thus may not be immune. That is, if a service provider is "responsible, in whole or in part, for the creation or development of the offending content," its actions could fall outside the CDA's protections. Whether the CDA protections extend to damage caused by malware is still largely an open question of law. Companies could therefore be liable for third-party damage resulting from an Instegogram attack, even if they did not know the digital image was infected. As no statutory immunities exist to shield social media users, a company could be liable for any resulting damage caused by a criminal hacker's embedded command-and-control infrastructure. 


The only CIO resolution that matters

CIOs need to own, or at the very least contribute substantively to, the overarching narrative regarding IT’s business context — what is going on, what has gone right, what has gone wrong, which technology developments require action, and so on. Ideally, the office of the CIO would brief the enterprise on these matters on a systematic basis. I am thinking of something similar to the President’s Daily Brief. Four critical decisions need to be made to establish such a brief: what form the briefing should take, what subject areas to keep an eye on, which constituencies need to be briefed, and what time frame to use. Once those decisions have been made, a systematic program of insight capture must also be instituted for the briefings to be effective. Such a learning process — observe, orient — can’t be left to chance. The CIO could assign an individual or a set of deputies responsible for enumerating and sharing targeted insights to critical constituencies on a daily, weekly, or monthly basis. This knowledge wrangling — and ignorance vanquishing — operation could work at a departmental or group level and rotate around the staff on an episodic basis.


A lifecycle is a thing that exists from beginning to end. A product lifecycle lasts from inception to decommission. A production lifecycle is a regular way of producing products. A business lifecycle may go from order to ship. A lifecycle is often not even a thing itself, but just the process a thing goes through. I am aging each year. ... Architects build things. We are creative professionals. We make cathedrals. We make solutions. We make better business outcomes in particular ways. I have never seen a group of people who so deeply care about the products they create. Delivery of an outcome based on a business and technology strategy is 90% of architects’ jobs in practice. This means we touch a LOT of lifecycles. And that is why we must be so very good at navigating them: at discussing pros and cons without religious belief, even without emotion. Building beautiful things is its own reward. How we get there is part of the price we pay to do it. The actual architecture lifecycle, then, is the method used by ALL of the architects in the practice to deliver value to the organization! But beware: this means all of them. You can’t put business architects in one stack, solution architects in another, and enterprise architects in a third.


The Elusive Quest for DevSecOps Collaboration

While the concept of DevSecOps has been discussed for years as a best practice for integrating security into development lifecycles, actual adoption has been gradual at best. As Or Shoshani, CEO of cloud security provider Stream Security, explains, "In most of the organizations that we have been working with and exposed to, the SecOps and DevOps are still being separated into two different groups." The reality is that despite widespread consensus on the need for closer collaboration between security and development teams, real-world progress has lagged. Shoshani attributes this to the constant tension between an exciting vision and on-the-ground implementation realities. Just as with past innovations like multi-cloud, he notes, "Everybody talks about it, but the industry isn't ready." Systemic culture shifts take patience. What's behind this lagging evolution? Entrenched challenges around processes, mindsets, and communication persist. Groups accustomed to working in silos and throwing issues "over the wall" resist new rhythms. Security teams trying to validate each release phase before the next begins trip up accelerated development timetables. And without air cover from leadership, there's little incentive to try.


Building A Secure Foundation: Embracing Best Practices For Coding

Not sure where to start when investing in secure coding practices? Begin with these tips: Organizations should provide developers with comprehensive training on secure coding practices. This training should cover topics such as common vulnerabilities, mitigation techniques and security tools; Organizations should invest in static analysis tools to help developers identify and address vulnerabilities in their code. These tools can automate the process of detecting security flaws, saving developers time and effort; Organizations should create a culture of security awareness within the development team. This culture should encourage open communication about security concerns and promote a shared responsibility for building secure software applications; Developers should stay up to date on the latest security threats and vulnerabilities, which can be achieved by reading security blogs, attending conferences and participating in online forums; Developers should utilize security tools and resources to identify and address potential security flaws in their code. These tools can include static analysis tools, code review tools and security libraries.


From Compliance-First to Risk-First: Why Companies Need a Culture Shift

A paradigm shift is underway as businesses evolve – transitioning from a traditional "Compliance-First" approach to a more dynamic and forward-thinking "Risk-First" mindset. This cultural shift recognizes that compliance, while essential, should not be viewed in isolation but as an integral component of a broader risk management strategy. This evolution is not merely a conceptual adjustment but a pragmatic necessity, as organizations seek to proactively identify, understand, and mitigate risks, enhancing their resilience and adaptability in an ever-changing business environment. This examination dives into the importance of companies adopting a cultural transformation: moving from a narrow emphasis solely on compliance to a broader and more strategic embrace of risk. Beyond mere obligation, this shift fosters a culture that meets regulatory requirements and positions organizations to thrive amid uncertainty, bolstering their long-term sustainability. As we explore the complexities of this change, we uncover the fundamental connection between compliance and risk.



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham