Daily Tech Digest - January 03, 2024

5 best practices for digital twin implementation

Rather than wait until post-build, consider initiating digital twins during the planning, design, and construction phases of your projects. At the planning stage, this can enable plan simulation and various what-if scenario testing prior to committing to real-world investment. Part of the benefit of digital twins is they can address the full lifecycle from construction twins to operational twins. The digital twins, therefore, know far more than after-the-fact asset management systems, and the learnings and insights captured by the twin during design and build can improve operations and maintenance. According to Rapos, early incorporation allows for better data collection, more accurate modeling, and immediate feedback during the construction or development phase. It’s crucial to understand that digital twins aren’t just a final product, but a dynamic tool that evolves and adds value throughout the project’s life. Delaying its development can result in missed opportunities for optimization and innovation.
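
At the planning stage, a twin can start as nothing more than a parameterized model swept across what-if scenarios before any real-world spend. A minimal sketch, with an entirely invented energy-cost model and parameters:

    # Toy what-if sweep for a planning-stage twin.
    # The cost model and all numbers are illustrative assumptions.

    def annual_energy_cost(floor_area_m2: float, insulation_r: float,
                           tariff_per_kwh: float = 0.15) -> float:
        """Toy model: consumption falls as insulation improves."""
        base_kwh_per_m2 = 120.0
        kwh = floor_area_m2 * base_kwh_per_m2 / insulation_r
        return kwh * tariff_per_kwh

    scenarios = {"code-minimum": 2.0, "upgraded": 3.5, "premium": 5.0}
    for name, r_value in scenarios.items():
        cost = annual_energy_cost(floor_area_m2=10_000, insulation_r=r_value)
        print(f"{name:>14}: ~${cost:,.0f}/year")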


Why exit the cloud? 37signals explains

37signals was a significant cloud user with a $3.2 million cloud budget for 2022. The company pledged $600,000 to procure Dell servers, envisioning significant savings over the next five years. Of course, there were questions, and Hansson did an excellent job of addressing them one by one in the FAQ, such as the additional cost of the staff needed to run the on-premises systems, how optimization only took them so far in the cloud, and how they handled security requirements. Hansson also explained the limited ability of cloud-native applications to reduce costs and highlighted the need for a world-class team to address security concerns, which the company has. Notably, privacy regulations and GDPR compliance were underscored as reasons for European companies to opt for self-owned hardware rather than relying on the cloud. Of course, this is not the case for everyone. ... Everyone is looking for a single answer, and it doesn’t exist. The requirements of your systems will dictate what platform you should use—not whatever seems trendy. Sometimes the cloud provides the most value, but not always.
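
The payback math is simple enough to sanity-check. A rough sketch using only the figures cited in the article, plus a deliberately invented number for ongoing on-premises running costs:

    # Back-of-envelope cloud-exit payback using the article's figures.
    # The on-prem running cost is an illustrative assumption, not 37signals' number.
    cloud_annual = 3_200_000   # 2022 cloud budget (from the article)
    hardware_capex = 600_000   # Dell server purchase (from the article)
    onprem_annual = 1_000_000  # ASSUMED: staff, colocation, power, maintenance

    years = 5
    cloud_total = cloud_annual * years
    onprem_total = hardware_capex + onprem_annual * years
    print(f"5-year cloud spend:   ${cloud_total:,}")
    print(f"5-year on-prem spend: ${onprem_total:,}")
    print(f"Projected savings:    ${cloud_total - onprem_total:,}")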


Size doesn’t matter!

Small enterprises are less likely to have dedicated IT staff, let alone afford cyber security specialists. Security solutions are usually considered too expensive (Chidukwani 2022), and their technical features come across as too complex to handle in-house. As a consequence, there is a tendency to rely heavily on external IT vendors that provide sub-optimal support without customized care (Benz 2020). Fear-driven, some business owners take the reactive route: instead of a unified threat solution, they continue to buy off-the-shelf security products in response to recently emerging threats, leaving many leakages unplugged and protection ineffective. These human, financial, and technical resource constraints create a puzzling gap between the cyber security awareness of small business leaders and their commensurate commitment to addressing the risk. Alongside the well-known construct of the ‘digital divide’, academic literature now also acknowledges a ‘security divide’, reflecting lagging investments in cybersecurity solutions coupled with increasing cyber incidents at SMEs (Heidt et al., 2019).


Cybersecurity challenges emerge in the wake of API expansion

APIs are already the fundamental building blocks of any modern organization, and that will become even more evident going forward. As organizations look to transform their digital business and enter the era of the API economy, we expect that we will be building and using more and more APIs. That’s especially true if we look at some of the trends happening in technology today: VR/AR glasses, wearable devices, and voice-controlled devices all require APIs to work. APIs will play an even more critical role as the world transitions to browserless devices. All this growth and expansion means more APIs, more requests, and more security challenges. The toughest thing about API security is that, in most cases, organizations don’t know that hackers are exploiting their APIs because they don’t have access to API data in real time. That’s why tooling that provides that real-time visibility will become even more critical.


Attackers Abuse Google OAuth Endpoint to Hijack User Sessions

OAuth enables applications to gain access to data and resources held by other trusted online services and sites based on permissions set by a user, and it is the mechanism responsible for the authentication handoff between the sites. While the standard is certainly useful, it also presents risk to organizations if it's not implemented correctly, and there are a number of ways attackers can abuse vulnerable instances and the standard itself. For example, security researchers have found implementation flaws that exposed key online services platforms such as Booking.com to attack. Meanwhile, others have used malicious OAuth apps of their own creation to compromise Microsoft Exchange servers. In the case of the Google endpoint, the OAuth exploit discovered by Prisma targets Google Chrome's token_service table to extract tokens and account IDs of logged-in Chrome profiles, according to CloudSEK. That table contains two "crucial" columns, titled "service (GAIA ID)" and "encrypted_token," Karthick M explained.
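
Defenders who want to see what is at stake can inspect that table themselves; in most Chrome builds it lives in the profile's "Web Data" SQLite file. A read-only sketch (the profile path is an assumption that varies by OS and profile name, and the tokens stay encrypted -- this only confirms their presence):

    # Defensive look at Chrome's token_service table (presence check only).
    import os, shutil, sqlite3, tempfile

    PROFILE = r"C:\Users\me\AppData\Local\Google\Chrome\User Data\Default"  # ASSUMED

    # Chrome locks the live database, so inspect a copy.
    copy_path = os.path.join(tempfile.gettempdir(), "webdata-copy.db")
    shutil.copy2(os.path.join(PROFILE, "Web Data"), copy_path)

    con = sqlite3.connect(copy_path)
    for service, token in con.execute(
            "SELECT service, encrypted_token FROM token_service"):
        # Tokens are encrypted at rest (DPAPI on Windows); do not decrypt them.
        print(f"GAIA service id: {service}; encrypted token: {len(token)} bytes")
    con.close()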


Observability in 2024: More OpenTelemetry, Less Confusion

Observability has transcended its traditional association with monitoring to find bugs and resolve outages. It now extends across different interfaces and tools, demonstrates greater openness and compatibility, and is increasingly used to make forecasts. These forecasts can involve predicting outages before they happen, cost shifts, resource usage, and other variables that previously would have been much harder to anticipate and mostly involved trial and error. ... “This means that organizations can now use a single agent to collect observability data across their increasingly distributed and therefore complex universe of microservices applications,” Volk said. “This could significantly simplify one of today’s most significant pain points in observability: instrumentation. Developers can now benefit from the continuously increasing auto-instrumentation capabilities of OpenTelemetry and no longer have to worry about instrumenting their code for specific observability platforms.” However, such freedom of choice, driven by a proliferation of tools, has created challenges of its own.
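
The vendor neutrality Volk describes is visible even in manual instrumentation: the code below depends only on the OpenTelemetry API and SDK, and swapping the console exporter for an OTLP exporter retargets it at any compatible backend. A minimal Python sketch (service and attribute names are invented):

    # Vendor-neutral tracing with the OpenTelemetry Python SDK.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import (
        ConsoleSpanExporter,
        SimpleSpanProcessor,
    )

    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")  # hypothetical service name

    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("order.items", 3)       # hypothetical attribute
        ...  # business logic goes here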


IT’s Key Role in Planting ESG Effort

The one thing we know about all compliance measures is that they require new levels of integration that the company usually lacks. If you can focus on integration work now, you will be more agile - and better prepared for ESG regs when they hit. Keep your ears to the ground - You can learn a lot about the directions ESG is taking from your outside audit firms, regulators and your internal legal or regulatory department. These entities already have advance information on future ESG directions and what laws or regulations are likely to be forthcoming. Do your part internally - Several years ago, I was visiting with the CIO of a large healthcare company in the Northeast. He told me that the company wanted to trim its carbon footprint and that the first place it looked for tangible results was the data center. “This prompted us to move more IT to the cloud, and even to build a new, eco-friendly data center,” he said. “We virtualized servers as much as possible, reduced energy consumption, mandated that all new equipment we purchased used less power, and even redid the HVAC unit airflows.”


Why 2024 will be the year of ‘augmented mentality’

With this AI technology now available for consumer use, companies are rushing to build it into systems that can guide you through your daily interactions. This means putting a camera, microphone and motion sensors on your body in a way that can feed the AI model and allow it to provide context-aware assistance throughout your life. The most natural place to put these sensors is in glasses, because that ensures cameras are looking in the direction of a person’s gaze. Stereo microphones on eyewear (or earbuds) can also capture the soundscape with spatial fidelity, allowing the AI to know the direction that sounds are coming from — like barking dogs, honking cars and crying kids. In my opinion, the company that is currently leading the way to products in this space is Meta. Two months ago they began selling a new version of their Ray-Ban smart glasses that was configured to support advanced AI models. The big question I’ve been tracking is when they would roll out the software needed to provide context-aware AI assistance.


Google flaunts concurrency, optimization as cloud rivals overhaul platforms

Kazmaier explains that Google’s approach to concurrency avoids spinning up more virtual machines and instead improves performance at the level of sub-CPU capacity units. “It moves these capacity units seamlessly around, so you may have a query which is finishing and freeing up resources, which can be moved immediately to another query which can benefit from acceleration. All of that micro-optimization takes place without the system sizing up. It's constantly giving you the ideal projection of the capacity you use on the workloads you run,” he says. A paper from Gartner earlier last year approved of the approach. "A mix of on-demand and flat-rate pricing slot reservation models provides the means to allocate capacity across the organization. Based on the model used, slot resources are allocated to submitted queries. Where slot demand exceeds current availability, additional slots are queued and held for processing once capacity is available. This processing model allows for continued processing of concurrent large query workloads," it says.
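
The queueing behavior Gartner describes is easy to picture as a toy simulation: a fixed pool of slots, queries that take what is free, and a queue for the overflow. A sketch with invented numbers, not BigQuery's actual scheduler:

    # Toy slot scheduler: queries take free slots; excess demand queues.
    from collections import deque

    TOTAL_SLOTS = 100
    free_slots = TOTAL_SLOTS
    waiting = deque()

    def submit(query, slots_needed):
        global free_slots
        if slots_needed <= free_slots:
            free_slots -= slots_needed
            print(f"{query}: running with {slots_needed} slots")
        else:
            waiting.append((query, slots_needed))
            print(f"{query}: queued (only {free_slots} slots free)")

    def finish(slots_released):
        global free_slots
        free_slots += slots_released
        while waiting and waiting[0][1] <= free_slots:
            submit(*waiting.popleft())

    submit("q1", 60)
    submit("q2", 70)  # queues: only 40 slots free
    finish(60)        # q1 completes; q2 is promoted from the queue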


As AI Advances, Who Is Looking to Its Architecture?

There is a case to be made, though, that enterprise architects have a much more fundamental role to play in our current phase of technological evolution than simply implementing its advancements into our workflows. AI solutions must seek to enhance the role and productivity of enterprise architects, not attempt to supplant them. Standards are important not just because they enable collaboration, but because they build consensus. A successful standard draws on the insights and expertise of the whole community of practitioners that needs to use it. In that process, many conversations are had – and occasionally quite fraught ones – in the interest of finding a common understanding of what a good, mature, responsible, successful approach looks like: one that puts the human at the center of the decision loop. The point of listing so many of AI’s potential positive outcomes earlier in this article was not just to emphasize how dramatic and wide-ranging its impact could be.



Quote for the day:

"People often say that motivation doesn't last. Well, neither does bathing - that's why we recommend it daily." -- Zig Ziglar

Daily Tech Digest - January 02, 2024

Decoding the Black Box of AI – Scientists Uncover Unexpected Results

“If the GNNs do what they are expected to, they need to learn the interactions between the compound and target protein and the predictions should be determined by prioritizing specific interactions,” explains Prof. Bajorath. According to the research team’s analyses, however, the six GNNs essentially failed to do so. Most GNNs only learned a few protein-drug interactions and mainly focused on the ligands. Bajorath: “To predict the binding strength of a molecule to a target protein, the models mainly ‘remembered’ chemically similar molecules that they encountered during training and their binding data, regardless of the target protein. These learned chemical similarities then essentially determined the predictions.” According to the scientists, this is largely reminiscent of the “Clever Hans effect”. This effect refers to a horse that could apparently count. How often Hans tapped his hoof was supposed to indicate the result of a calculation. As it turned out later, however, the horse was not able to calculate at all, but deduced expected results from nuances in the facial expressions and gestures of his companion.
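
The behavior the team describes amounts to a protein-blind nearest-neighbor baseline: predict a compound's affinity from its most chemically similar training compounds and ignore the target entirely. That control experiment fits in a few lines (the fingerprints and affinities below are random stand-ins for real compound data):

    # Protein-blind nearest-neighbor baseline -- the "Clever Hans" control.
    # Fingerprints and affinities are random stand-ins, not real data.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    X_train = rng.integers(0, 2, size=(500, 256)).astype(bool)  # bit-vector fingerprints
    y_train = rng.normal(6.0, 1.0, size=500)                    # fake binding affinities

    # Jaccard distance on bit-vectors corresponds to Tanimoto similarity.
    knn = KNeighborsRegressor(n_neighbors=5, metric="jaccard")
    knn.fit(X_train, y_train)

    X_test = rng.integers(0, 2, size=(10, 256)).astype(bool)
    print(knn.predict(X_test))  # affinity guesses using no protein information at all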


Why 2024 is the year for IT managers to revamp their green IT plans

Conversations with enterprise operators reveal that many do not consolidate applications when they refresh their IT equipment and are hesitant to deploy power-aware workload management tools out of concern for impacting the reliability of their IT operations. IT managers must guide their organisations to intelligently utilise their available equipment capacity, using software tools to measure, manage, and maximise utilisation within reliability constraints. All organisations should set equipment utilisation goals and build multi-year efficiency project plans to improve IT infrastructure energy performance. Data available from IT equipment manufacturers indicate that workloads on two to eight old servers can be migrated to one new server when deploying n+1 (e.g., Intel or AMD CPU generation 3 to 4) or n+2 (e.g., Intel or AMD CPU generation 2 to 4) technology. Similar improvements can be achieved in storage and network equipment. Consolidating CPU workloads and storage and network operations allows a defined workload to run on one-half to three-quarters of the equipment.
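
The consolidation arithmetic is worth making concrete. A rough sketch using a 4:1 migration ratio from the article's two-to-eight range and invented per-server power draws:

    # Server consolidation estimate; power figures are illustrative assumptions.
    old_servers = 100
    consolidation_ratio = 4          # article: 2-8 old servers -> 1 new server
    old_watts, new_watts = 350, 450  # ASSUMED average draw per server

    new_servers = old_servers / consolidation_ratio
    old_kw = old_servers * old_watts / 1000
    new_kw = new_servers * new_watts / 1000
    print(f"{old_servers} old servers -> {new_servers:.0f} new servers")
    print(f"Estimated IT load: {old_kw:.1f} kW -> {new_kw:.1f} kW "
          f"({1 - new_kw / old_kw:.0%} reduction)")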


AI Everywhere, All the Time: Top Developments of 2023

AI-powered robots are increasingly automating tasks in manufacturing and logistics, among other industries, driving efficiency and changing the nature of work. Some major milestones include Tesla's Optimus Bot prototype demonstrating dexterous and adaptable humanoid robots, which could shape future automation solutions. Separately, Boston Dynamics' Atlas showcased its parkour skills, paving the way for applications in search and rescue or disaster response. The AlphaFold 2 AI system, developed by Alphabet subsidiary DeepMind, can perform predictions of protein structure, and stands to revolutionize drug discovery and personalized medicine, carrying the potential for helping mitigate numerous diseases. Robotic surgery systems grow ever more sophisticated, while AI-powered prosthetics offer amputees greater control and functionality. AI algorithms are already assisting doctors in medical diagnosis for diseases such as cancer, offering increased accuracy and early detection possibilities. 


How Gamification Can Help Your Business

At work, gamification is often used to build employee experience by promoting fun competition and immersive learning experiences, leading to better information retention and a heightened incentive to engage in ongoing learning and upskilling, Ringman says. Gamification is frequently used to boost staff productivity. “In any business, there are many things that need to be done every day that many of us aren’t naturally motivated to do,” Avila observes. Gamification provides helpful context, guidance, and rewards, allowing tasks to be completed faster and more efficiently while improving focus. “This, in turn, helps the company achieve larger business goals.” Brands can also tap into gamification as they strive to engage customers and transform ordinary interactions into memorable experiences. Ringman notes that brands can use gamification to add extra fun to loyalty programs by hosting contests and competitions, as well as awarding virtual badges and trophies to customers as they complete various actions or pass significant milestones.


Envisioning a great future – India as a SuperPower

A nation’s growth is underpinned by technological advancement and how swiftly it adopts tech. During the recent state visit of India’s Prime Minister, Narendra Modiji, to the United States, he put a lot of emphasis on growing technology that will revolutionize various industries. India is fast moving towards digitization. The thrust from the Government of India with the Digital India initiative and the growing use of digital technologies such as Artificial Intelligence, Machine Learning and Data Analytics across various private organisations is bringing a phenomenal shift in India’s growth and development. Secondly, there is going to be a lot of disruption in the way we work. With AI, much work will be done by bots, so it is important to have highly skilled labor to manage the AI, which will also require upskilling the workforce even as it frees up more leisure. The way society works will change, and society will need to be adaptable. AI tools can be used as add-on tools to enable our lawyers, CAs, economists and leaders at large. Today, India is a force to be reckoned with in the domain of Information Technology, without an iota of doubt.


Wi-Fi 7’s mission-critical role in enterprise, industrial networking

Wi-Fi 7 devices can use multi-link operation (MLO) in the 2.4 GHz, 5 GHz, and 6 GHz bands to increase throughput by aggregating multiple links or to quickly move critical applications to the optimal band using seamless switching between links. Fast link switching allows Wi-Fi 7 devices to avoid interference and access Wi-Fi channels without delaying critical traffic. This and other new features also make Wi-Fi 7 ideal for immersive XR/AR/VR, online gaming and other consumer applications that require high throughput, low latency, minimal jitter, and high reliability. ... Naturally, there are challenges in achieving seamless connectivity between 5G and Wi-Fi. A lot of industry alignment is needed to enable frictionless movement between networks, across technologies, vendors, and areas such as authentication, QoS, QoE and security. The Wireless Broadband Alliance is playing a key role in bringing all the stakeholders (operators, enterprises and network owners) together to ensure collaboration and alignment on the frameworks that will deliver seamless connectivity.


Data Center Governance Trends to Watch in 2024

Historically, data center governance did not drive frequent conversations in the data center industry. Data center operators sometimes talked about it, but it has not tended to be a core area of concern – perhaps because, unlike other types of governance, data center governance isn't a requirement for businesses seeking to meet regulatory rules or avoid compliance fines. Looking ahead, however, governance in data centers is likely to become a more common item of discussion. Data centers have now matured to the point that businesses are increasingly keen to squeeze as much efficiency as possible out of them. In the past, disorganized data center assets or lack of optimal server room layouts may not have been critical. But today, data center operators face growing pressure to maximize the efficiency of their facilities. Certain regulators are now requiring disclosures about data center emissions, for example, meaning that increasing energy efficiency through effective governance practices has become important for protecting businesses' brands and reputations.


Essential skills for today’s threat analysts

Very often, for instance, there's an urgent need to communicate a new vulnerability to different audiences, which demands tailored communications for technical teams, CISOs, and board members. Williams highlights task management and patience, especially when dealing with uncertain or misleading information, and above all, coordinating between different sources of information. "So much of threat hunting today relates to that living off the land kind of thing where you're seeing things that look malicious. And so oftentimes you’re developing hypotheses and that involves consulting system admin and working toward a resolution," says Williams. ... It's also a mind game, with threat hunters needing to be highly adaptable as threats are changing daily, sometimes hourly. "You need to change with them. Never allow an inflexible mind to pervade your operational approach," says Brian Hussey, VP of threat hunting, intelligence and DFIR at SentinelOne. At the same time, you also need to see the forest through the trees. "Often threat actors introduce surface changes to their attack patterns, but core modus operandi remains unchanged, leaving important opportunities to identify and eliminate new attacks, even before they arrive," Hussey tells CSO.


Want to tackle technical debt? Sell it as business risk

There is no magic potion that can eliminate all technical debt, but it can be attacked via budgeting if it is not perceived merely as upgrading IT infrastructure. What CIOs need to do instead is to present IT infrastructure investment as an important corporate financial and risk management issue that the business can’t afford to ignore. ... Technical budget justifications for IT infrastructure upgrades, which are seldom linked to end business strategies, make it easy for budget decision-makers to defer IT infrastructure investment. Instead, budget decision-makers figure that the company can “make do” because IT will somehow find a way to keep systems running. CIOs must change this thinking. They can start the process by changing IT infrastructure investment justifications from technical explanations to corporate financial and risk management explanations. ... CIOs should also team with the CFO to help reframe the tech debt narrative, because CFOs are always on the lookout for new corporate financial and risk management scenarios.


Leveraging Leadership: The Fourfold Path to Business Control

Belief systems function as a mechanism for communicating the core values, objectives, and mission of the organization, thus providing guidance and motivation to staff members. By encouraging people to improve their customer service through the inculcation of positive values, conduct, performance, and a feeling of inclusion, this lever ensures the fulfillment of the organization's objectives. In the absence of a clearly defined belief system, employees are forced to depend on conjecture regarding the organization's intended behaviors and objectives. ... Without stifling individuals' capacity for innovation or entrepreneurship, this control mechanism permits the development of policies and standards that tell individuals which behaviors to avoid. Boundary systems implement regulations, codes of conduct, and predetermined strategic boundaries to delineate acceptable and unacceptable employee conduct, thereby establishing governing parameters. These boundaries clearly define the irreversible consequences of violating ethical principles and the outcomes that must be avoided.



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville

Daily Tech Digest - January 01, 2024

4 key devsecops skills for the generative AI era

CIOs and IT leaders must prepare their teams and employees for this paradigm shift and how generative AI impacts digital transformation priorities. Nicole Helmer, VP of development and customer success learning at SAP, says training must be a priority. “Companies should prioritize training for developers, and the critical factor in increasing adaptability is to create space for developers to learn, explore, and get hands-on experience with these new AI technologies,” she says. The shift may be profound and tactical as more IT automation becomes productized, enabling IT to shift to more innovation, architecture, and security responsibilities. “In light of generative AI, devops teams should deprioritize basic scripting skills for infrastructure provisioning and configuration, low-level monitoring configurations and metrics tracking, and test automation,” says Dr. Harrick Vin, chief technology officer of TCS. “Instead, they should focus more on product requirements analysis, acceptance criteria definition, and software and architectural design, all of which require critical thinking, design, strategic goal setting, and creative problem-solving skills.”


Don't neglect API functional testing

The first step to building a successful API functional testing strategy is to understand each API, its functions and its requirements. API requirements are often found within API documentation, but specific and necessary details are sometimes omitted. Work with the API developers to ensure documentation includes the expected behavior under all scenarios, error conditions and status codes, the API's purpose and objective, and how the API affects the application workflow. As a QA tester responsible for functional API testing, create a test plan and approach. Next, select an API testing tool that enables testers to create and execute both automated and manual tests. Many existing QA and developer tools include an option for API testing; check the capabilities of your existing tools before adding another. Next, develop test cases. Once the test cases are created, organize them into all working combinations. One option is to create tests and then execute them. Or, within many tools, testers can quickly test as they go. In other words, you can be testing each request as you develop the test.
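
A minimal illustration of such test cases, sketched with pytest and the requests library against a hypothetical /users endpoint (the base URL, fields, and status codes are assumptions to be replaced by your API's documented behavior):

    # Hypothetical functional tests for a /users endpoint (pytest + requests).
    import requests

    BASE = "https://api.example.com/v1"  # assumed base URL

    def test_create_user_returns_201_and_id():
        resp = requests.post(f"{BASE}/users", json={"name": "Ada"}, timeout=5)
        assert resp.status_code == 201
        assert "id" in resp.json()

    def test_missing_required_field_returns_400():
        resp = requests.post(f"{BASE}/users", json={}, timeout=5)
        assert resp.status_code == 400  # documented error condition

    def test_unknown_user_returns_404():
        resp = requests.get(f"{BASE}/users/does-not-exist", timeout=5)
        assert resp.status_code == 404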


Decentralization stands as one of the most profound principles championed by blockchain. While the term often evokes images of intricate algorithms and cryptographic nodes, its implications on leadership and organizational structuring are profound. At its core, decentralization heralds a departure from the age-old top-down management models. Consider the rise of decentralized finance (DeFi) platforms, which are disrupting traditional banking systems. Instead of a centralized authority making decisions, these platforms empower their users through consensus mechanisms and democratized governance. Compound, a leading DeFi platform, is a testament to this. It operates with a decentralized governance model where token holders propose, discuss, and implement changes to the platform. This not only ensures transparency, but also inculcates a deep sense of ownership among its participants. This decentralization isn't just confined to the crypto realm. Businesses are realizing the value of distributed decision-making. For instance, the Spotify model of team organization, where squads, tribes, chapters, and guilds collaborate across functions, exemplifies a shift from rigid hierarchies to fluid, decentralized structures.


Shaping finance through technological prowess

In the present scenario, technology stands as the cornerstone of well-informed decision-making for CFOs. The integration of data analytics and artificial intelligence can equip CFOs with robust tools to dissect vast data sets, enabling them to make precise predictions and optimise resource allocation. For instance, predictive analytics has emerged as a powerful instrument that can enable CFOs to anticipate market trends and customer behaviour, thereby guiding financial strategies with unprecedented precision. Consider a scenario where a CFO of a manufacturing company leverages data analytics to optimise inventory management. By analysing historical sales data, production rates, and external market factors, the CFO can use tools to predict demand fluctuations and adjust inventory levels accordingly. This approach may not only minimise excess inventory costs but also ensure that the company is well-prepared to meet customer demands swiftly. The financial decision-making process has transitioned from a reactive stance to one driven by data-driven insights, propelling the company toward financial agility.
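
That scenario can be prototyped in a few lines. A toy sketch with pandas, using a trailing three-month average as the forecast (sales figures and the buffer policy are invented; a production system would use a real forecasting model):

    # Toy demand forecast: trailing 3-month average predicts next month.
    import pandas as pd

    sales = pd.Series(
        [120, 135, 128, 160, 155, 170],
        index=pd.period_range("2023-07", periods=6, freq="M"),
        name="units_sold",
    )
    forecast = sales.rolling(window=3).mean().iloc[-1]
    safety_stock = 0.2 * forecast  # ASSUMED 20% buffer policy
    print(f"Next-month demand forecast: {forecast:.0f} units")
    print(f"Suggested stock level:      {forecast + safety_stock:.0f} units")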


Soon, every employee will be both AI builder and AI consumer

The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report's authors argue, or -- as they put it -- "instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one." ... Automated platforms and generative AI -- leveraged within an open and supportive corporate culture -- may amplify many human skills, they continue. "10x engineers could become much less rare. Especially as generative AI continues to bolster developer productivity and opens up a future of increased workplace automation, many of today's hindrances may not be relevant in the next five to 10 years." It's all about fostering a superior "developer experience," not just within IT shops, but across the enterprise as well. "As technology itself continues to become more and more central to the business, technology tasks and required talent will likely become central as well. ..."


How CTOs can win over the board room

Now is the time for engineering leaders to showcase engineering’s value. There’s not a single business that hasn’t been impacted by resource tightening over the last year. While CFOs are increasingly focusing on cost optimization within their businesses, they continue to prioritize growth, according to a survey by Gartner. Engineering leaders must show how they’re driving this growth. Engineering leaders who couldn’t clearly show major business impact were the first to see cuts during 2022 recession concerns. While the rest were forced to “do more with less,” they were at least able to sustain critical projects and fight for their headcount. Why? Because they clearly communicated the importance of specific investments and projects to the business’s success. No one can argue the last year has been easy for leaders across the board. But I believe good is coming from these challenges. It has forced engineering leaders to scrutinize their investments and allowed them to identify their most critical assets, enabling them to innovate even during economic uncertainty.


Data Privacy Paradox: Balancing Innovation with Protection in the Age of AI

While AI’s potential for progress shines bright, its foundation rests upon a vast ocean of personal data – our online activity, location trails, and even social media whispers. This dependence raises a chilling specter: data surveillance. The specter of governments and corporations peering over our digital shoulders, gleaning insights into our lives, fuels fears of mass surveillance and the potential misuse of this sensitive information. This specter chills not only with its invasive nature, but also with its implications for individual freedoms and potential abuses of power. But the concerns go beyond the watchful eye of Big Brother. AI’s algorithms, trained on vast datasets, can become unwitting vessels of algorithmic bias. Imagine a credit scoring system fueled by biased data that unfairly disadvantages certain demographics. Or a criminal justice system where AI-powered predictions exacerbate existing prejudices. These are not dystopian nightmares; they are real possibilities if we fail to address the inherent biases that can creep into the heart of AI. Furthermore, the inner workings of these algorithms often remain shrouded in a veil of secrecy.


Why You Are So Resistant to Change — And How to Overcome It

As an entrepreneur, your ability to change and adapt is arguably the single most important contributor to long-term success. Stagnant businesses simply can't flourish, grow or (like those heart patients unwilling to modify their habits) survive. Ask yourself, how receptive are you to transformation in yourself, your processes, and your entire organization? Now is the time to evolve as a business owner. Start with an unwavering desire for continuous improvement. The next step is finding that emotional connection and the people or groups who can support you on your journey of change. For business leaders, these relationships are often found outside of one's own company in the form of peer advisory boards or mastermind groups. Peer advisory boards provide business owners with the requisite support and emotional connection that act as catalysts for forward progress and even innovation. As the president and CEO of such an organization, I get to witness the transformative power of connection all the time. It is truly amazing to see what can happen between owners and executives who care about each other's welfare and respect, support and elevate each other on their paths to transformation.


Open Source in 2024: More Volatility, More Risk, More AI

But there’s plenty standing in the way of increased international cooperation around tech – or indeed, international cooperation around anything. To paraphrase a former British prime minister, the greatest challenge for a leader is, “Events, dear boy. Events.” If the last three years have been event-packed, 2024 will be equally so, not least because of an unprecedented number of elections due, including the U.S. presidential race. These elections can become cybersecurity incidents themselves. But they could also herald and shape further regulation and legislation that could directly affect the open source world. Both the U.S. and E.U. have been putting in place legislation and regulation around AI, but it is 2024 that will see how these efforts start playing out in the real (virtual) world. The European Union’s Cyber Resilience Act will also come into effect in 2024. Recently announced revisions have reportedly made it less overtly problematic for open source, but the final text is yet to be released. At the same time, the U.S. has already been turning the technology screws on China and Russia, choking off exports of GPUs to the former, for example, and enforcing wide-ranging sanctions on the latter.


Infrastructure, Operations Leaders Must Focus on DevOps, SRE Initiatives

Rajesh Ganesan, president of ManageEngine, notes that data breaches and data privacy law violations can do irreparable damage to an organization's reputation. “By making privacy and data governance a top priority in 2024, I&O leaders can ensure their organizations are compliant with privacy laws and protected against data breaches,” he explained in an email interview. “It's crucial that every employee in the organization takes personal responsibility for data privacy.” Ganesan points out that if organizations have the financial means, it is wise to invest in private data centers. “Organizations that invest in their own domain controller and security operations can control their security posture and make sure poor levels of security from the public service provider does not affect them,” he says. Not only are these companies protected from any breaches that occur in a public cloud environment, but they also have an easier time complying with legislation, as specific control measures can be put in place.



Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson

Daily Tech Digest - December 29, 2023

5 Ways That AI Is Set To Transform Cybersecurity

Cybersecurity has long been notoriously siloed, with organizations installing many different tools and products, often poorly interconnected. No matter how hard vendors and organizations work to integrate tools, coalescing all relevant cybersecurity information into one place remains a big challenge. But AI offers a way to combine multiple data sets from many disparate sources and provide a truly unified view of an organization’s security posture, with actionable insights. And with generative AI, gaining those insights is so easy, a matter of simply asking the system questions such as “What are the top three things I could do today to reduce risk?” or “What would be the best way to respond to this incident report?” AI has the potential to consolidate security feeds in a way the industry has never been able to quite figure out. Generative AI will blow up the very nature of data infrastructure. Think about it: All the different tools that organizations use to store and manage data are built for humans. Essentially, they’re designed to segment information and put it in various electronic boxes for people to retrieve later. It’s a model based on how the human mind works.


Microservices Resilient Testing Framework

Resilience in microservices refers to the system's ability to handle and recover from failures, continue operating under adverse conditions, and maintain functionality despite challenges like network latency, high traffic, or the failure of individual service components. Microservices architectures are distributed by nature, often involving multiple, loosely coupled services that communicate over a network. This distribution often increases the system's exposure to potential points of failure, making resilience a critical factor. A resilient microservices system can gracefully handle partial failures, prevent them from cascading through the system, and ensure overall system stability and reliability. For resilience, it is important to think in terms of positive and negative testing scenarios. The right combination of positive and negative testing plays a crucial role in achieving this resilience, allowing teams to anticipate and prepare for a range of scenarios and maintaining a robust, stable, and trustworthy system. For this reason, the rest of the article will be focusing on negative and positive scenarios for all our testing activities.
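
In code, a negative scenario usually means injecting the failure you fear -- a timeout, an error response -- and asserting that the service degrades gracefully instead of cascading. A self-contained sketch of a retry-with-fallback wrapper with one positive and one negative test (all names are illustrative):

    # Retry-with-fallback plus positive/negative tests for it.
    import unittest

    def resilient_call(fetch, retries=2, fallback="cached-value"):
        """Try fetch() a few times; degrade to a fallback instead of failing."""
        for _ in range(retries + 1):
            try:
                return fetch()
            except TimeoutError:
                continue
        return fallback

    class ResilienceTests(unittest.TestCase):
        def test_positive_healthy_dependency(self):
            self.assertEqual(resilient_call(lambda: "live-value"), "live-value")

        def test_negative_dependency_always_times_out(self):
            def flaky():
                raise TimeoutError("simulated downstream outage")
            # Negative scenario: the service must degrade, not crash.
            self.assertEqual(resilient_call(flaky), "cached-value")

    if __name__ == "__main__":
        unittest.main()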


Skynet Ahoy? What to Expect for Next-Gen AI Security Risks

From a cyberattack perspective, threat actors already have found myriad ways to weaponize ChatGPT and other AI systems. One way has been to use the models to create sophisticated business email compromise (BEC) and other phishing attacks, which require the creation of socially engineered, personalized messages designed for success. "With malware, ChatGPT enables cybercriminals to make infinite code variations to stay one step ahead of the malware detection engines," Harr says. AI hallucinations also pose a significant security threat and allow malicious actors to arm LLM-based technology like ChatGPT in a unique way. An AI hallucination is a plausible response by the AI that's insufficient, biased, or flat-out not true. "Fictional or other unwanted responses can steer organizations into faulty decision-making, processes, and misleading communications," warns Avivah Litan, a Gartner vice president. Threat actors also can use these hallucinations to poison LLMs and "generate specific misinformation in response to a question," observes Michael Rinehart, vice president of AI at data security provider Securiti. 


Cybersecurity teams need new skills even as they struggle to manage legacy systems

To stay ahead, though, security leaders should incorporate prompt engineering training for their team, so they can better understand how generative AI prompts function, the analyst said. She also underscored the need for penetration testers and red teams to include prompt-driven engagements in their assessment of solutions powered by generative AI and large language models. They need to develop offensive AI security skills to ensure models are not tainted or stolen by cybercriminals seeking intellectual property. They also have to ensure sensitive data used to train these models are not exposed or leaked, she said. In addition to the ability to write more convincing phishing email, generative AI tools can be manipulated to write malware despite limitations put in place to prevent this, noted Jeremy Pizzala, EY's Asia-Pacific cybersecurity consulting leader. He noted that researchers, including himself, have been able to circumvent ethical restrictions that guide platforms such as ChatGPT and prompt them to write malware.


The relationship between cloud FinOps and security

Established FinOps and cybersecurity teams should annually evaluate their working relationship as part of continuous improvement. This collaboration helps ensure that, as practices and tools evolve, the correct FinOps data is available to cybersecurity teams as part of their monitoring, incident response and post-incident forensics. The FinOps Foundation doesn't mention cybersecurity in its FinOps Maturity Model. But, by all rights, FinOps and cybersecurity collaboration indicates a maturing organization in the model's Run phase. Ideally, moves to establish such collaboration should show themselves in the Walk stage. ... Building a relationship between the FinOps and cybersecurity teams should start early, when an organization chooses a FinOps tool. A FinOps team can better forecast expenses, plan budget allocation and avoid unnecessary costs by understanding security requirements and constraints. These forecasts result in a more cost-effective and financially efficient cloud operation, so plan for some level of cross-training between the teams.


What is GRC? The rising importance of governance, risk, and compliance

Like other parts of enterprise operations, GRC comprises a mix of people, process, and technology. To implement an effective GRC program, enterprise leaders must first understand their business, its mission, and its objectives, according to Ameet Jugnauth, the ISACA London Chapter board vice president and a member of the ISACA Emerging Trends Working Group. Executives then must identify the legal and regulatory requirements the organization must meet and establish the organization’s risk profile based on the environment in which it operates, he says. “Understand the business, your business environment (internal and external), your risk appetite, and what the government wants you to achieve. That all sets your GRC,” he adds. The roles that lead these activities vary from one organization to the next. Midsize to large organizations typically have C-level executives — namely a chief governance officer, chief risk officer, and chief compliance officer — to oversee these tasks, McKee says. These executives lead risk or compliance departments with dedicated teams.


Revolutionising Fraud Detection: The Role of AI in Safeguarding Financial Systems

Conventional fraud detection methods, primarily rule-based systems, and human analysis, have proven increasingly inadequate in the face of evolving fraud tactics. Rule-based systems, while effective in identifying simple patterns, often struggle to adapt to the ever-changing landscape of fraud. Fraudsters have stronger motivation and they evolve faster than the rules in the rules engine. ... The same volumes of data that are overwhelming for traditional fraud detection systems are fuel for AI. With its ability to learn from vast amounts of data and identify complex patterns, AI is poised to revolutionize the fight against fraud. ... While AI offers immense potential, it’s crucial to acknowledge the challenges associated with its adoption. Data privacy concerns, ethical considerations around algorithmic bias, and the need for robust security measures are all critical aspects that demand careful attention. As AI opens new frontiers in fraud prevention, unregulated AI technology such as deepfake in the wrong hands could also enable sophisticated impersonation scams. However, the benefits of AI far outweigh the challenges. 
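
As a small illustration of the pattern-learning approach, an unsupervised detector such as scikit-learn's IsolationForest can flag outlying transactions without any hand-written rules. A sketch on synthetic data (real systems train on far richer features):

    # Unsupervised fraud flagging with IsolationForest on synthetic transactions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    # Features: [amount, hour_of_day] -- a deliberately tiny, invented schema.
    normal = np.column_stack([rng.normal(60, 20, 1000), rng.normal(14, 3, 1000)])
    fraud = np.array([[4800, 3.0], [5200, 4.0]])  # large amounts at odd hours
    X = np.vstack([normal, fraud])

    model = IsolationForest(contamination=0.01, random_state=0).fit(X)
    flags = model.predict(X)  # -1 = anomaly, 1 = normal
    print("Flagged rows:", np.where(flags == -1)[0])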


API security in 2024: Predictions and trends

The rapid rate of change of APIs means organizations will always have vulnerabilities that need to be remediated. As a result, 2024 will usher in a new era where visibility will be a priority for API security strategies. Preventing attackers from entering the perimeter is not a 100% foolproof strategy. Whereas having real-time visibility into a security environment will enable rapid responses from security teams that neutralize threats before they impact operations or extract valuable data. ... With the widespread use of APIs, especially in sectors such as financial services, regulators are looking to encourage transparency in APIs. This means data privacy concerns and regulations will continue to impact API use in 2024. In response, organizations are becoming wary of having third parties hold and access their data to conduct security analyses. We expect to see a shift in 2024 where organizations will demand running security solutions locally within their own environments. Self-managed solutions (either on-premise or private cloud), eliminate the need to filter, redact, and anonymize data before it’s stored.


The Terrapin Attack: A New Threat to SSH Integrity

Microsoft’s logic is that the impact on Win32-OpenSSH is limited. This is a major mistake. Microsoft’s decision allows unknown server-side implementation bugs to remain exploitable in a Terrapin-like attack, even if the server got patched to support “strict kex.” As one Windows user noted, “This puts Microsoft customers at risk of avoidable Terrapin-style attacks targeting implementation flaws of the server.” Exactly so. You see, for this protection to be effective, both client and server must be patched. If one or the other is vulnerable, the entire connection can still be attacked. So to be safe, you must patch and update both your client and server SSH software. So, if you’re on Windows and you haven’t manually updated your workstations, their connections are open to attack. While patches and updates are being released, the widespread nature of this vulnerability means that it will take time for all clients and servers to be updated. Because you must already have an MITM attacker in place to be vulnerable, I wouldn’t go spend the holiday season worrying myself sick. I mean, you’re sure you don’t already have a hacker inside your system, right? Right!?
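
One quick first pass on exposure is to read the version banner an SSH server advertises; OpenSSH 9.6 is the release that shipped the “strict kex” countermeasure. A standard-library sketch (the host is hypothetical, and the parse is crude -- a real scanner inspects the key exchange itself):

    # First-pass Terrapin exposure check via the SSH banner (stdlib only).
    import socket

    def ssh_banner(host: str, port: int = 22) -> str:
        with socket.create_connection((host, port), timeout=5) as sock:
            return sock.recv(256).decode(errors="replace").strip()

    banner = ssh_banner("ssh.example.com")  # hypothetical host
    print("Banner:", banner)                # e.g. SSH-2.0-OpenSSH_9.6p1
    if "OpenSSH_" in banner:
        # Crude parse (misreads e.g. 9.10); banner checks are a heuristic only.
        major_minor = float(banner.split("OpenSSH_")[1][:3])
        print("Likely patched" if major_minor >= 9.6 else "Review and patch")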


Supporting Privacy, Security and Digital Trust Through Effective Enterprise Data Management Programs

Those professionals responsible for supporting privacy efforts should therefore prioritize effective enterprise data management because it is integral to safeguarding individuals’ privacy. A well-structured data management framework works to ensure that personal information is handled ethically and in compliance with regulations, while fostering a culture of responsible data stewardship within organizations. When done right, this reinforces trust with stakeholders, serves as a differentiator in the marketplace, improves visibility into data ecosystems, expands the reliability of data, and optimizes scalability and innovative go-to-market efforts. ... Most, if not all, of the global data privacy laws and regulations require data to be managed effectively. To comply with these laws and regulations, organizations must first understand the data they collect, the purposes for its collection, how it is used, how it is shared, how it is stored, how it is destroyed, and so on. Only after organizations have a full understanding of their data ecosystem can they begin to implement effective controls to both protect data and preserve the ability of the data to achieve intended operational goals.



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown

Daily Tech Digest - December 28, 2023

CISO: Top 10 Trends for 2024

Mike highlighted recent legal cases involving CISOs, expressing concern about the unprecedented accountability of security professionals and the potential for them to be scapegoated. He discussed cases like Joe Sullivan at Uber and Tim Brown at SolarWinds, emphasizing the SEC's issuance of a Wells Notice for a CISO, a first in history. Mike questioned the trend of holding CISOs responsible for issues beyond their control and predicted a continued exodus of CISOs from their roles due to perceived lack of support. Yogesh offered a contrasting view, suggesting that recent cases may serve as catalysts for elevating the role of CISOs and improving security programs. ... Nitin addressed the widespread reliance on third parties in today's technological landscape and the need for continuous due diligence beyond initial assessments. Nitin emphasized the importance of close coordination and regular conversations with key third-party providers, highlighting the significance of vendor management skills and understanding the scope of responsibilities. Yogesh brought up the concept of shared responsibility models inspired by the practices of AWS and Amazon, emphasizing the need for a prioritized and evolving approach to third-party risk management.


Why People Should Be at the Heart of Operational Resilience

Embracing the ethos of “you build it, you run it” isn’t necessarily a bad thing, but turning it into a fetish can easily lead us into a place where failures and faults become the responsibility of individuals. That’s not good for anyone, humans or technology. “If the resilience of a system depends on humans never making mistakes, then the system is really brittle,” Shortridge said. “Humanity’s success is because of our creativity and ability to adapt; it isn’t because we’re great at doing the same thing the same way every time, or can memorize 50 things on a checklist that we never forget.” Although DevOps is well-intentioned in attempting to break down barriers, it has arguably contributed to a broader organizational discomfort with failure — a desire to control and minimize risk. “Many organizations struggle with the existential angst of wanting to prevent anything bad from ever happening,” Shortridge claimed. This she added, is ultimately “an impossible goal … It’s a downward spiral where the fear of things going wrong results in a slower, heavier approach, which actually increases the likelihood of things going wrong – as well as hindering the ability to swiftly recover from failure.”


Managing vendor partners is not a “one-and-done” activity. That’s why technology is so crucial to keep this process from being a herculean effort and make continuous monitoring more realistic throughout every stage of the vendor lifecycle. As an example, consider the initial assessment stage, when companies invite vendors to bid or pitch their services. Security questionnaires should be required at this point, especially for prospects that would be gaining full access to systems. These questionnaires can be automated to start, while still allowing respondents to supplement responses or resources. It's also a good idea to require a security audit report to illuminate any gaps that would need to be addressed before a contract gets signed. Regardless of the size or influence of vendor prospects, companies should always do their due diligence when it comes to assessing risks to avoid easily preventable attacks. Companies should provide a contract to approved vendors that clearly outlines compliance expectations — including a timeline of how long they have to fix any issues identified in the earlier security audit.


Getting the most from cloud parking

Cloud parking, a component of FinOps, is the practice of shutting down cloud resources when your business is not using them. For example, if you have a cloud server instance running on a service like EC2, turning the server off when it's not hosting an active workload is an example of cloud parking. Later, if you want to use the server again, you'd "unpark" it by starting the instance back up. Cloud parking is important because almost all cloud services charge, at least in part, based on total running time. By parking cloud resources that you're not actively using, you stop the pricing meter and avoid paying for resources you don't actually need. ... Most types of cloud data resources, such as databases or storage volumes, can't be shut off in the same way that compute resources can, so businesses end up paying for their data even after applications that interact with the data are no longer running. With a sophisticated toolset that allows you to convert between data storage types quickly, it's possible to minimize this cost. For instance, imagine you shut down an EC2 instance and want to stop paying for the EBS volume that the instance uses.
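
With AWS, parking and unparking compute comes down to a pair of API calls. A minimal boto3 sketch (the region, instance ID, and schedule are placeholders; many teams trigger this from a scheduled Lambda):

    # Park and unpark an EC2 instance with boto3 (IDs are placeholders).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    INSTANCE_ID = "i-0123456789abcdef0"  # ASSUMED placeholder

    def park():
        # Stopping ends compute billing; attached EBS volumes still accrue cost.
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])

    def unpark():
        ec2.start_instances(InstanceIds=[INSTANCE_ID])

    park()  # e.g., fired by a 7 p.m. schedule; unpark() again at 7 a.m.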


The secret to making data analytics as transformative as generative AI

Unstructured and ungoverned data lakes, often built around the Hadoop ecosystem, have become the alternative to traditional data warehouses. They’re flexible and can store large amounts of semi-structured and unstructured data, but they require an extraordinary amount of preparation before the model ever runs. ... “The power of GPUs allows them to analyze as much data as they want,” Leff says. “I feel like we’re so conditioned — we know our system cannot handle unlimited data. I can’t just take a billion rows if I want and look at a thousand columns. I know I have to limit it. I have to sample it and summarize it. I have to do all sorts of things to get it to a size that’s workable. You completely unlock that because of GPUs.” RAPIDS, Nvidia’s open-source suite of GPU-accelerated data science and AI libraries also accelerates performance by orders of magnitude at scale across data pipelines by taking the massive parallelism that’s now possible and allowing organizations to apply it toward accelerating the Python and SQL data science ecosystems, adding enormous power underneath familiar interfaces.
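
Part of the RAPIDS pitch is that the familiar interface stays put while the engine underneath changes. A cuDF sketch whose API mirrors pandas (requires an NVIDIA GPU and a RAPIDS install; the file and column names are invented):

    # pandas-style analytics on the GPU with cuDF (hypothetical file/columns).
    import cudf

    df = cudf.read_csv("transactions.csv")  # loads straight into GPU memory
    summary = (
        df[df["amount"] > 0]
        .groupby("merchant_category")["amount"]
        .agg(["count", "mean", "sum"])
    )
    print(summary.sort_values("sum", ascending=False).head(10))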


3 Strategies For Turning Uncertainty Into a Clear Path Forward

To stand up to uncertainty, you must start reframing it as an opportunity. For leaders, rapidly multiplying unknowns increases the pressure to rebuild and reimagine their businesses. Though the journey from uncertainty to clarity is formidable, addressing challenges with a lens of opportunity leads to more and better innovation. ... A focus on simplicity can help. Simplicity is about focusing on the right things rather than doing things right. It's about focusing on the fundamentals, such as customer needs, and simple but powerful questions, such as "what do they need?" that help you get to the core of a problem and ensure you're solving the right one. "Keep it simple" means focusing on the strongest growth opportunities and having the courage to get rid of efforts that don't move the needle. ... For many who have "grown up" in large, resource-rich corporate environments, there is an instinct to default to resources (e.g. budget, headcount) to solve problems. However, my research over 15 years has shown that constraints can help navigate uncertainty. How? By activating creativity and ingenuity and relying on existing resources rather than waiting for additional resources to get started.


AI Investments We Shouldn't Overlook

AI is not a product -- it’s an ever-growing cycle of data usage, and people can be a huge factor in its failure. This leads us back to trust, as most people don’t trust the technology or the leaders working to regulate it. According to Pew Research, 52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence. Those concerns are particularly strong in communities historically underrepresented in the design and deployment of technology. Meaningful participation including communities of users, impacted, as well as creators will improve ethical inquiry, help reduce harmful biases, and build confidence in AI’s fairness. To help allay these concerns, we need to have “seats at the table” for people with broader domain expertise. This is especially vital in areas such as health, finance, and law enforcement where bias has existed historically and is still a serious concern. Additionally, we should consider funding the National Science Foundation’s National AI Research Resource Task Force, and similar efforts, to reduce the economic barriers of entry into AI professions.


How to turn shadow IT into a culture of grassroots innovation

Balancing innovation with IT control remains necessary. Cybersecurity, including privacy and data protection, is considered the top business risk by corporate leaders. Your organization’s risk tolerance will depend on its culture, customers, and industry. Many aspects of security will be non-negotiable, but many can be solved by listening to users and evolving how you use platforms and services. One of the main risks associated with shadow IT is being blind to where company data lives. Without control, you can’t apply consistent policies. Let teams know why security processes are necessary and which standards any platform or tool must meet. Work to understand the business purpose of the adoption so you can help them find an alternative if their initial choice doesn’t meet those standards. The goal is to help users make intelligent security decisions – or help them behave securely by default – while enabling them to take advantage of technology that enhances their work. For example, by adopting a single sign-on solution with multi-factor authentication, you can solve access issues and give people a wider choice of apps and services while maintaining centralized visibility.


Unstructured Data Management Predictions for 2024

Data is increasingly in motion as IT needs to leverage new storage technologies and satisfy new business requirements. Enterprise data migrations of unstructured file and object data have long been complex and too manual and often require professional services. Automation and AI tools will change this, enabling intelligent, efficient data migrations that no longer need IT managers to babysit them and they will also be adaptive. Modern tools will know how to solve problems on the fly and self-remediate and will be able to recommend optimal storage tiers for different unstructured data workloads and use cases. This is a timely development, as data migrations are becoming more varied all the time and dependent upon the customer's changing environment — from firewall to network connections to security configurations. ... Unstructured data management will deliver affordable resiliency at a fraction of the cost, by creating cheap copies in durable object storage in the cloud for non-critical data — which is the bulk of all data in storage. This "poor man's data resiliency" approach will complement the 3x backup method for mission-critical data to create a cost-effective and holistic disaster recovery strategy.


ChatGPT can cough up sensitive information, raises privacy concerns

While catastrophic forgetting is supposed to bury old information as new data is added, researchers from Indiana University (IU) Bloomington have found that memories of these large language models (LLMs) can be jogged, posing privacy risks. According to a New York Times report, graphics editor Jeremy White was informed that his email address was procured via ChatGPT by an IU Ph.D. candidate, Rui Zhu. Zhu and his team were able to obtain White's email address and those of over 30 NYT employees from GPT-3.5 Turbo, an LLM from OpenAI. ... Speaking to the Daily Mail, AI expert Mike Wooldridge warned that confiding in ChatGPT about personal matters or opinions, such as work grievances or political preferences, could have consequences, reported The Guardian. Sharing private information with the chatbot may be "extremely unwise" as the revealed data contributes to training future versions. Wooldridge emphasizes that users should not expect a balanced response, as the technology tends to "tell you what you want to hear." He also dismissed the idea that AI possesses empathy or sympathy and cautions users that anything shared with ChatGPT may be used in future versions, making retractions nearly impossible.



Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous