Daily Tech Digest - January 04, 2024

Beyond transactions: Reimagining banking with superior digital customer journeys

In their journey toward digitalization, many banks have concentrated on streamlining customer journeys, often through targeted solutions like automation or AI to address specific problem areas. However, true digital success goes beyond these piecemeal enhancements. For banks to truly excel in the digital arena, a shift in mindset is required. It’s about adopting a platform-first approach that offers a more integrated, holistic solution rather than completely overhauling existing systems. This approach isn’t just about fine-tuning isolated components; it’s about seamlessly connecting these components through a unified platform that enhances the overall efficiency and effectiveness of the banking ecosystem. ... Many banks streamline, but the real leaders in digital banking are those who reimagine the entire customer journey. They take their ‘hands off the wheel,’ designing systems that amplify human potential while supporting touchless operations. They weave AI into the fabric of banking to deliver seamless service that’s not just efficient but also intuitive and connected. 


5 Microservices Design Patterns Every DevOps Team Should Know

The API gateway serves as a single entry point for all client requests. It routes requests to the appropriate microservice and subsequently aggregates the responses. It also handles cross-cutting concerns like authentication, monitoring and rate limiting. Furthermore, it provides a unified API that is easier to consume by the client, shielding them from the complexity of the microservices architecture. However, the API Gateway pattern is not without its challenges. It can become a bottleneck if not properly designed and scaled. Also, it’s a single point of failure unless highly available. Despite these challenges, with careful design choices and good operational practices, the API Gateway pattern can greatly simplify client interaction with microservices. ... The Circuit Breaker pattern aims to prevent this scenario. With the Circuit Breaker pattern, you can prevent a network or service failure from cascading to other services. When a failure is detected, the circuit breaker trips and prevents further calls to the failing service. It then periodically attempts to call the service, and if successful, it closes the circuit and lets the calls go through.
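The Circuit Breaker pattern described above can be sketched in a few lines. This is a minimal, illustrative version only; the threshold, timeout, and half-open retry policy shown here are simplifying assumptions, not any particular library's implementation:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after repeated failures,
    then allows a single trial call once a cooldown has elapsed."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            # Cooldown elapsed: let one trial call through ("half-open").
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        # A successful call closes the circuit and resets the count.
        self.failures = 0
        self.opened_at = None
        return result
```

Once tripped, callers fail fast with a local error instead of piling more load onto the struggling downstream service, which is exactly the cascading-failure behavior the pattern exists to prevent.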


6 warning signs CIOs should look out for in 2024

2023 saw a massive boom in AI, and governments are starting to catch up. In the US, President Biden rolled out an executive order on the safe and secure uses of AI, while in the European Union, lawmakers in December agreed on the details of the AI Act — one of the first bills in the world to establish comprehensive rules for AI. So CIOs will have to follow the debate closely as the year progresses. “Staying updated with new regulations, especially regarding AI ethics, data usage, and copyright concerns, is crucial,” says Bilyk. “Ignoring these changes can lead to legal complications and a loss of public trust.” Companies should make sure they have enough compliance experts, while startups need to hire them early on because they have to understand if and how regulations apply to them. Also, it helps if CIOs know exactly which AI-powered tools their company uses and how their in-house tools are developed. Not knowing this is a serious red flag. “A lot of times, leadership, or the legal side, doesn’t even know what developers are building,” Joseph Thacker, security researcher at AppOmni, told CSO. “I think for small and medium enterprises, it’s going to be pretty tough.”


A Look at Microsoft's Secure Future Initiative

My main gripes with the SFI, and the way it's presented, are twofold. First, Satya is nowhere to be seen. This is not the CEO having a moment of deep realization that the current path is dangerous and the ship needs to steer clear of the icebergs ahead. It's security leaders talking about incremental security improvements. Second, it's marketing word salad (mostly, see more below). It talks in generalities about improvements but not enough about concrete, measurable and transparent details that we can see and use to start rebuilding our trust in Microsoft's security culture. ... In an ever-increasing "speed of feature release" competition with AWS in the IaaS and PaaS cloud spaces, and ditto in SaaS with Google (I think this is a mistake, Google Workspace is no longer a serious competitor to Microsoft 365 except in very small businesses) and to a lesser extent Salesforce and others, will program managers at Microsoft be incentivized to say "No, we can't release this feature now, even though the competition is, because we'll need to spend another two months ironing out the security issues"?


Want to Identify Good Generative AI Use Cases? Don’t Be Boring!

When we talk about impact, we should always be thinking about acceleration. Code generation, for instance, is a great example of generative AI’s ability to take complex inputs and produce complex outputs. Writing code, even for master coders, can be a complex and time-consuming task. Great code is also invaluable to organizations and their solutions. With generative AI, an expert coder could prompt a model with something like, “Generate some code that sorts this array here in descending order” and receive a working output almost instantaneously. Sure, they could write that code themselves, but generative AI accelerates their job multiple times over. This is a capability that, until now, was only a dream. Conversational analytics is also another “golden zone” use case because of its accelerative potential. Within business intelligence (BI) and other advanced organizational data/AI tools, you could ask a model something like: “What have our sales been for each of our product groups, and are there any trends?” To do this in code would take a person, team, or department a long time. 
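To make the acceleration concrete: a hypothetical output for the sorting prompt quoted above might look like the snippet below. The function name and sample data are illustrative, not taken from any actual model output:

```python
def sort_descending(values):
    """Return a new list with the values sorted in descending order."""
    return sorted(values, reverse=True)

print(sort_descending([3, 1, 4, 1, 5]))  # → [5, 4, 3, 1, 1]
```

Trivial to write by hand, yes, but the point in the article stands: the model returns working code near-instantly, and the same mechanism scales to far less trivial requests.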


Emotional Cyberrisk Management Decisions

There are certain aspects of managing cyberrisk in which emotion can play an important role. However, far too often, emotion is treated as the default tool in risk management, rather than the specialty tool it is meant to be. Cognitive biases and heuristics have an outsized influence on cyberrisk management. They are hard-wired into our brains and, if managed poorly, can make it difficult to ascertain reality. One of the more common sources of bias is confirmation bias, which is the tendency to search for, interpret, and recall information in a way that confirms one’s preexisting beliefs or hypotheses. Confirmation bias may lead someone to disregard evidence that contradicts their existing beliefs. Overconfidence bias is also common. This leads individuals to overestimate their abilities, such as the effectiveness of their cybersecurity controls. Lastly, the anchoring effect happens when someone relies too heavily on the first piece of information ingested (i.e., the anchor) when making decisions, such as an initial risk assessment that may be outdated.


Why Data Quality Will Rise to the Top of Enterprise Priorities in 2024

Data quality is essential. Even if you have found the data sets that will be appropriate for training an AI model or digging for insights, your results will be poor if the quality of your data is poor. Ownership of data and data quality are core to business success but are still too often ignored or overlooked by the executives and board of directors of most organizations. We can see this in the disconnects between perception and reality. Over 80% of executives that HFS surveyed say they trust their data, but the reality is that there are still many people doing a lot of work to get data quality to a level where the data can be consumed and relied upon. As data quality takes on greater importance, it will escalate to an executive-level conversation. Enterprises will need to start collecting and documenting data, metadata, processes, and business rules as they pursue data quality. Without these basic elements, AI models won’t be able to produce insightful and accurate results. If you haven’t yet, you’ll need to invest in initiatives to improve your data quality so you can build a strong foundation for AI use.
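As a small illustration of the groundwork involved, a first-pass data-quality profile often just counts missing values and duplicate keys. The sketch below is a hypothetical minimal version; the record fields and key choice are illustrative assumptions, not a prescription:

```python
def profile(rows, key_fields):
    """Compute simple data-quality metrics for a list of dict records:
    per-field missing-value counts and duplicate-key counts."""
    missing = {}
    seen, duplicates = set(), 0
    for row in rows:
        for field, value in row.items():
            if value in ("", None):
                missing[field] = missing.get(field, 0) + 1
        key = tuple(row.get(f) for f in key_fields)
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(rows), "missing": missing, "duplicates": duplicates}

report = profile(
    [
        {"id": "1", "name": "Ada"},
        {"id": "1", "name": "Ada"},   # duplicate key
        {"id": "2", "name": ""},      # missing name
    ],
    key_fields=["id"],
)
```

Numbers like these give executives something measurable to track, which is usually the first step in moving data quality from perception to reality.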


The Rise of Linux in Edge Computing and IoT

Edge devices often operate with limited resources. While Linux’s scalability is essentially an asset, striking a balance between functionality and resource usage is important. Careful optimization is essential to ensure efficient utilization of limited resources. ... There is a vast array of edge devices, ranging from sensors and actuators to gateways and edge servers. The diversity can present a challenge in terms of maintenance and optimization. Addressing various architectures and hardware configurations is a time-intensive process that requires continuous effort. ... Decentralization is a common feature of edge computing systems, with a multitude of devices each having unique vulnerabilities. Regularly patching and updating these devices can be challenging, as well as difficult to track. Organizations must be vigilant about maintaining the security integrity of edge computing systems. ... Integrating various devices into a cohesive system poses complex challenges. Developers must navigate the intricacies to ensure interoperability among devices running Linux and other operating systems.


Ransomware Group Steals Australian Courts' Video Recordings

Louise Anderson, chief executive officer of the court, did not reveal the nature of the attack but said the department had been able to isolate the affected network and could confirm that the unauthorized access had been restricted to recordings stored on the network. "We understand this will be unsettling for those who have been part of a hearing. We recognize and apologize for the distress that this may cause people," Anderson said. "Maintaining security for court users is our highest priority. Our current efforts are focused on ensuring our systems are safe and making sure we notify people in hearings where recordings may have been accessed." Anderson did not share the court's efforts to regain access to the compromised recordings but said changes are being made to hearing arrangements and will be announced shortly. Anderson said the hackers had breached a single computer system that manages only audiovisual recordings for all court jurisdictions. "The system holds recordings for around 28 days, so the primary investigation period is Nov. 1 to Dec. 21, which is when we identified the problem and isolated and disabled the affected network," she said.


Align IT With the Business

For most CIOs, redefining IT as a strategic discipline and getting the rest of the organization to recognize this isn’t an overnight process. This is also an area where the CIO has to take the lead. Taking the lead means changing perceptions of IT by altering IT practices so IT better aligns with the business. ... Those CIOs regularly visit with these managers to understand business goals and pain points, and then strategize on how IT solutions can help the managers (and the business) succeed. As part of this process, more CIOs are placing IT personnel such as business analysts directly into business units, or they’re teaming their own IT analysts with citizen developers in user departments. This creates inter-departmental cooperation, and it is a natural onramp for relationship building with other business managers. ... Being transformative in your company can be risky and isn’t for the faint of heart, but there are CIOs who have understood their companies’ businesses and have come up with transformative solutions that have grown out of IT. These breakthroughs have reinvented their companies.



Quote for the day:

"Make heroes out of the employees who personify what you want to see in the organization." -- Anita Roddick

Daily Tech Digest - January 03, 2024

5 best practices for digital twin implementation

Rather than wait until post-build, consider initiating digital twins during the planning, design, and construction phases of your projects. At the planning stage, this can enable plan simulation and various what-if scenario testing prior to committing to real-world investment. Part of the benefit of digital twins is they can address the full lifecycle from construction twins to operational twins. The digital twins, therefore, know far more than after-the-fact asset management systems, and the learnings and insights captured by the twin during design and build can improve operations and maintenance. According to Rapos, early incorporation allows for better data collection, more accurate modeling, and immediate feedback during the construction or development phase. It’s crucial to understand that digital twins aren’t just a final product, but a dynamic tool that evolves and adds value throughout the project’s life. Delaying its development can result in missed opportunities for optimization and innovation.


Why exit the cloud? 37signals explains

37signals was a significant cloud user with a $3.2 million cloud budget for 2022. The company pledged $600,000 to procure Dell servers, envisioning significant savings during the next five years. Of course, there were questions, and Hansson did an excellent job of addressing them one by one in the FAQ, such as the additional costs in terms of humans needed to run the on-premises systems, how optimization only took them so far in the cloud, and how they handled security requirements. Hansson also explained the limited abilities of cloud-native applications to reduce costs and highlighted the need for a world-class team to address security concerns, which the company has. Notably, privacy regulations and GDPR compliance were underscored as reasons for European companies to opt for self-owned hardware as opposed to relying on the cloud. Of course, this is not the case for everyone. ... Everyone is looking for a single answer, and it doesn’t exist. The requirements of your systems will dictate what platform you should use—not whatever seems trendy. Sometimes the cloud provides the most value, but not always.


Size doesn’t matter!

Small enterprises are less likely to have dedicated IT staff, let alone afford cyber security specialists. Security solutions are usually considered too expensive (Chidukwani, 2022) and their technical features come across as overwhelmingly complex to be handled in-house. As a consequence, there is a tendency to rely heavily on external IT vendors that provide sub-optimal support without customized care (Benz, 2020). Fear-driven, some business owners take the reactive route. Instead of a unified threat solution, they continue to buy off-the-shelf security products in response to recently emerging threats, leaving many leakages unplugged and protection ineffective. These human, financial, and technical resource constraints create a puzzling gap between the cyber security awareness of small business leaders and their commensurate commitment to addressing the risk. Alongside the well-known construct of the ‘digital divide’, academic literature now also acknowledges a ‘security divide’: lagging investments in cybersecurity solutions coupled with increasing cyber incidents at SMEs (Heidt et al., 2019).


Cybersecurity challenges emerge in the wake of API expansion

APIs are already the fundamental building blocks of any modern organization today, and that will become even more evident going forward. As organizations look to transform their digital business and enter the era of the API economy, we expect that we will be building and using more and more APIs. That’s especially true if we take a look at some of the trends that are happening in technology nowadays. Things like VR/AR glasses, wearable devices, and voice-controlled devices all require APIs to work. APIs will play a more critical role as the world transitions to more browserless devices. All this growth and expansion means more APIs, more requests, and more security challenges. The toughest thing about API security is that, in most cases, organizations don’t know that hackers are exploiting their APIs because they don’t have access to API data in real time. That’s why tooling that provides that real-time visibility will become even more critical.


Attackers Abuse Google OAuth Endpoint to Hijack User Sessions

OAuth enables applications to gain access to data and resources hosted by other trusted online services and sites, based on permissions set by a user, and it is the mechanism responsible for the authentication handoff between the sites. While the standard is certainly useful, it also presents risk to organizations if it's not implemented correctly, and there are a number of ways attackers can abuse vulnerable instances and the standard itself. For example, security researchers have found flaws in its implementation that have exposed key online services platforms such as Booking.com and others to attack. Meanwhile, others have used malicious OAuth apps of their own creation to compromise Microsoft Exchange servers. In the case of the Google endpoint, the OAuth exploit discovered by Prisma targets Google Chrome's token_service table to extract tokens and account IDs of logged-in Chrome profiles, according to CloudSEK. That table contains two "crucial" columns, titled "service (GAIA ID)" and "encrypted_token," Karthick M explained.
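For context on the handoff the standard performs when used legitimately, the first step of a standard OAuth 2.0 authorization-code flow is the client sending the user to the provider's authorization endpoint. A minimal sketch follows; the endpoint, client ID, and redirect URI are hypothetical placeholders, not any real provider's values:

```python
import secrets
from urllib.parse import urlencode

def build_authorization_url(auth_endpoint, client_id, redirect_uri, scopes):
    """Build a standard OAuth 2.0 authorization-code request URL.
    The random state parameter guards against CSRF and must be checked
    when the provider redirects the user back."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
    }
    return f"{auth_endpoint}?{urlencode(params)}", state

url, state = build_authorization_url(
    "https://accounts.example.com/authorize",  # hypothetical endpoint
    "demo-client-id",
    "https://app.example.com/callback",
    ["profile", "email"],
)
```

The attack described in the article sidesteps this flow entirely by stealing already-issued tokens from local storage, which is why correct flows alone are not sufficient protection.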


Observability in 2024: More OpenTelemetry, Less Confusion

Observability has transcended its traditional association with monitoring to find bugs and resolve outages. It now extends across different interfaces and tools, demonstrates greater openness and compatibility, and is increasingly used to make forecasts. These forecasts can involve predicting outages before they happen, cost shifts, resource usage, and other variables that previously were much harder to anticipate and mostly involved trial and error. ... “This means that organizations can now use a single agent to collect observability data across their increasingly distributed and therefore complex universe of microservices applications,” Volk said. “This could significantly simplify one of today’s most significant pain points in observability: instrumentation. Developers can now benefit from the continuously increasing auto-instrumentation capabilities of OpenTelemetry and no longer have to worry about instrumenting their code for specific observability platforms.” However, such freedom of choice, driven by a proliferation of tools, has created challenges of its own.


IT’s Key Role in Planting ESG Effort

The one thing we know about all compliance measures is that they require new levels of integration that the company usually lacks. If you can focus on integration work now, you will be more agile and better prepared for ESG regs when they hit. Keep your ears to the ground - You can learn a lot about the directions ESG is taking from your outside audit firms, regulators and your internal legal or regulatory department. These entities already have information in advance on future ESG directions and what laws or regulations are likely to be forthcoming. Do your part internally - Several years ago, I was visiting with the CIO of a large healthcare company in the Northeast. He told me that the company wanted to trim its carbon footprint and that the first place the company looked for tangible results was in the data center. “This prompted us to move more IT to the cloud, and even to build a new, eco-friendly data center,” he said. “We virtualized servers as much as possible, reduced energy consumption, mandated that all new equipment we purchased used less power, and even redid the HVAC unit airflows.”


Why 2024 will be the year of ‘augmented mentality’

With this AI technology now available for consumer use, companies are rushing to build it into systems that can guide you through your daily interactions. This means putting a camera, microphone and motion sensors on your body in a way that can feed the AI model and allow it to provide context-aware assistance throughout your life. The most natural place to put these sensors is in glasses, because that ensures cameras are looking in the direction of a person’s gaze. Stereo microphones on eyewear (or earbuds) can also capture the soundscape with spatial fidelity, allowing the AI to know the direction that sounds are coming from — like barking dogs, honking cars and crying kids. In my opinion, the company that is currently leading the way to products in this space is Meta. Two months ago they began selling a new version of their Ray-Ban smart glasses that was configured to support advanced AI models. The big question I’ve been tracking is when they would roll out the software needed to provide context-aware AI assistance.


Google flaunts concurrency, optimization as cloud rivals overhaul platforms

Kazmaier explains that Google’s approach to concurrency avoids spinning up more virtual machines and instead improves performance at a sub-CPU unit level. “It moves these capacity units seamlessly around, so you may have a query which is finishing and freeing up resources, which can be moved immediately to another query which can benefit from acceleration. All of that micro-optimization takes place without the system sizing up. It's constantly giving you the ideal projection of the capacity you use on the workloads you run,” he says. A Gartner paper from last year endorsed the approach. "A mix of on-demand and flat-rate pricing slot reservation models provides the means to allocate capacity across the organization. Based on the model used, slot resources are allocated to submitted queries. Where slot demand exceeds current availability, additional slots are queued and held for processing once capacity is available. This processing model allows for continued processing of concurrent large query workloads," it says.
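The slot model Gartner describes, in which queries take slots when available and queue otherwise, can be illustrated with a toy scheduler. The fixed slot counts and strict FIFO queue here are simplifying assumptions for illustration only; the real system reallocates capacity far more dynamically:

```python
from collections import deque

class SlotScheduler:
    """Toy model of slot-based capacity allocation: a query runs if
    enough slots are free, otherwise it queues until a running query
    finishes and releases its slots."""

    def __init__(self, total_slots):
        self.free = total_slots
        self.queue = deque()
        self.running = {}

    def submit(self, query_id, slots_needed):
        if self.free >= slots_needed:
            self.free -= slots_needed
            self.running[query_id] = slots_needed
            return "running"
        self.queue.append((query_id, slots_needed))
        return "queued"

    def finish(self, query_id):
        # Freed slots are immediately handed to waiting queries, in order.
        self.free += self.running.pop(query_id)
        while self.queue and self.free >= self.queue[0][1]:
            qid, need = self.queue.popleft()
            self.free -= need
            self.running[qid] = need
```

Even this crude model shows the key property the quote highlights: capacity released by a finishing query flows straight to queued work instead of requiring the system to size up.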


As AI Advances, Who Is Looking to Its Architecture?

There is a case to be made, though, that enterprise architects have a much more fundamental role to play in our current phase of technological evolution than simply implementing its advancements into our workflows. AI solutions must seek to enhance the role and productivity of enterprise architects, not attempt to supplant them. Standards are important not just because they enable collaboration, but because they build consensus. A successful standard draws on the insights and expertise of the whole community of practitioners which needs to use it. In that process, many conversations are had – and occasionally quite fraught ones – in the interest of finding a common understanding of what a good, mature, responsible, successful approach looks like. One that puts the human at the center of the decision loop. The point of listing so many of AI’s potential positive outcomes earlier in this article was not just to emphasize how dramatic and wide-ranging its impact could be.



Quote for the day:

"People often say that motivation doesn't last. Well, neither does bathing - that's why we recommend it daily." -- Zig Ziglar

Daily Tech Digest - January 02, 2024

Decoding the Black Box of AI – Scientists Uncover Unexpected Results

“If the GNNs do what they are expected to, they need to learn the interactions between the compound and target protein and the predictions should be determined by prioritizing specific interactions,” explains Prof. Bajorath. According to the research team’s analyses, however, the six GNNs essentially failed to do so. Most GNNs only learned a few protein-drug interactions and mainly focused on the ligands. Bajorath: “To predict the binding strength of a molecule to a target protein, the models mainly ‘remembered’ chemically similar molecules that they encountered during training and their binding data, regardless of the target protein. These learned chemical similarities then essentially determined the predictions.” According to the scientists, this is largely reminiscent of the “Clever Hans effect”. This effect refers to a horse that could apparently count. How often Hans tapped his hoof was supposed to indicate the result of a calculation. As it turned out later, however, the horse was not able to calculate at all, but deduced expected results from nuances in the facial expressions and gestures of his companion.


Why 2024 is the year for IT managers to revamp their green IT plans

Conversations with enterprise operators reveal that many do not consolidate applications when they refresh their IT equipment and are hesitant to deploy power-aware workload management tools out of concern for impacting the reliability of their IT operations. IT managers must guide their organisations to intelligently utilise their available equipment capacity, using software tools to measure, manage, and maximise utilisation within reliability constraints. All organisations should set equipment utilisation goals and build multi-year efficiency project plans to improve IT infrastructure energy performance. Data available from IT equipment manufacturers indicates that workloads on two to eight old servers can be migrated to one new server when deploying n+1 (e.g., Intel or AMD CPU generation 3 to 4) or n+2 (e.g., Intel or AMD CPU generation 2 to 4) technology. Similar improvements can be achieved in storage and network equipment. Consolidating CPU workload and storage and network operations delivers a defined workload on three-quarters to one-half of the equipment.
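The consolidation arithmetic above is easy to sketch. This toy calculation estimates the replacement server count and the fraction of servers saved; the 4:1 ratio in the example is illustrative, drawn from the 2:1 to 8:1 range the text cites:

```python
def consolidation_savings(old_servers, consolidation_ratio):
    """Estimate how many new servers replace a fleet of old ones,
    given how many old servers one new server can absorb, and the
    resulting fractional reduction in server count."""
    new_servers = -(-old_servers // consolidation_ratio)  # ceiling division
    return new_servers, 1 - new_servers / old_servers

# e.g. migrating 100 old servers at a 4:1 consolidation ratio
new, saved = consolidation_savings(100, 4)
print(new, f"{saved:.0%}")  # → 25 75%
```

At the low end of the cited range (2:1) the fleet halves; at the high end (8:1) it shrinks by nearly 90%, which is why refresh-time consolidation matters so much for energy performance.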


AI Everywhere, All the Time: Top Developments of 2023

AI-powered robots are increasingly automating tasks in manufacturing and logistics, among other industries, driving efficiency and changing the nature of work. Some major milestones include Tesla's Optimus Bot prototype demonstrating dexterous and adaptable humanoid robots, which could shape future automation solutions. Separately, Boston Dynamics' Atlas showcased its parkour skills, paving the way for applications in search and rescue or disaster response. The AlphaFold 2 AI system, developed by Alphabet subsidiary DeepMind, can perform predictions of protein structure, and stands to revolutionize drug discovery and personalized medicine, carrying the potential for helping mitigate numerous diseases. Robotic surgery systems grow ever more sophisticated, while AI-powered prosthetics offer amputees greater control and functionality. AI algorithms are already assisting doctors in medical diagnosis for diseases such as cancer, offering increased accuracy and early detection possibilities. 


How Gamification Can Help Your Business

At work, gamification is often used to build employee experience by promoting fun competition and immersive learning experiences, leading to better information retention and a heightened incentive to engage in ongoing learning and upskilling, Ringman says. Gamification is frequently used to boost staff productivity. “In any business, there are many things that need to be done every day that many of us aren’t naturally motivated to do,” Avila observes. Gamification provides helpful context, guidance, and rewards, allowing tasks to be completed faster and more efficiently while improving focus. “This, in turn, helps the company achieve larger business goals.” Brands can also tap into gamification as they strive to engage customers and transform ordinary interactions into memorable experiences. Ringman notes that brands can use gamification to add extra fun to loyalty programs by hosting contests and competitions, as well as awarding virtual badges and trophies to customers as they complete various actions or pass significant milestones.


Envisioning a great future – India as a SuperPower

A nation’s growth is underpinned by technological advancement and how swiftly it adopts tech. During the recent state visit of India’s Prime Minister, Narendra Modiji, to the United States, he put a lot of emphasis on growing technology that will revolutionize various industries. India is fast moving towards digitization. The thrust from the Government of India with the Digital India initiative and the growing use of digital technologies such as Artificial Intelligence, Machine Learning and Data Analytics across various private organisations is bringing a phenomenal shift in India’s growth and development. Secondly, there is going to be a lot of disruption in the way we work. With AI, lots of work will be done by bots, so it is important to have highly skilled labor to manage the AI, which will also require upskilling, as we will have more leisure. The way society works will change and will need to be adaptable. AI tools can be used as add-on tools to enable our lawyers, CAs, economists and leaders at large. Today, India is a force to be reckoned with in the domain of Information Technology without an iota of doubt.


Wi-Fi 7’s mission-critical role in enterprise, industrial networking

Wi-Fi 7 devices can use multi-link operation (MLO) in the 2.4 GHz, 5 GHz, and 6 GHz bands to increase throughput by aggregating multiple links or to quickly move critical applications to the optimal band using seamless switching between links. Fast link switching allows Wi-Fi 7 devices to avoid interference and access Wi-Fi channels without delaying critical traffic. This and other new features also make Wi-Fi 7 ideal for immersive XR/AR/VR, online gaming and other consumer applications that require high throughput, low latency, minimal jitter, and high reliability. ... Naturally there are challenges with achieving seamless connectivity between 5G & Wi-Fi. A lot of industry alignment is needed to enable frictionless movement between networks, across technologies, vendors, and areas such as authentication, QoS, QoE and security. The Wireless Broadband Alliance is playing a key role in bringing all the stakeholders (operators, enterprises and network owners) together to ensure collaboration and alignment on the frameworks that will deliver seamless connectivity.


Data Center Governance Trends to Watch in 2024

Historically, data center governance did not drive frequent conversations in the data center industry. Data center operators sometimes talked about it, but it has not tended to be a core area of concern – perhaps because, unlike other types of governance, data center governance isn't a requirement for businesses seeking to meet regulatory rules or avoid compliance fines. Looking ahead, however, governance in data centers is likely to become a more common item of discussion. Data centers have now matured to the point that businesses are increasingly keen to squeeze as much efficiency as possible out of them. In the past, disorganized data center assets or lack of optimal server room layouts may not have been critical. But today, data center operators face growing pressure to maximize the efficiency of their facilities. Certain regulators are now requiring disclosures about data center emissions, for example, meaning that increasing energy efficiency through effective governance practice has become important for protecting business's brands and reputations.


Essential skills for today’s threat analysts

Very often, for instance, there's an urgent need to communicate a new vulnerability to different audiences, which demands tailored communications for technical teams, CISOs, and board members. Williams highlights task management and patience, especially when dealing with uncertain or misleading information, and above all, coordinating between different sources of information. "So much of threat hunting today relates to that living off the land kind of thing where you're seeing things that look malicious. And so oftentimes you’re developing hypotheses and that involves consulting system admins and working toward a resolution," says Williams. ... It's also a mind game, with threat hunters needing to be highly adaptable as threats are changing daily, sometimes hourly. "You need to change with them. Never allow an inflexible mind to pervade your operational approach," says Brian Hussey, VP of threat hunting, intelligence and DFIR at SentinelOne. At the same time, you also need to see the forest for the trees. "Often threat actors introduce surface changes to their attack patterns, but core modus operandi remains unchanged, leaving important opportunities to identify and eliminate new attacks, even before they arrive," Hussey tells CSO.


Want to tackle technical debt? Sell it as business risk

There is no magic potion that can eliminate all technical debt, but it can be attacked through budgeting, provided it is not perceived merely as upgrading IT infrastructure. What CIOs need to do instead is present IT infrastructure investment as an important corporate financial and risk management issue that the business can’t afford to ignore. ... Technical budget justifications for IT infrastructure upgrades, which are seldom linked to end business strategies, make it easy for budget decision-makers to defer IT infrastructure investment. Instead, budget decision-makers figure that the company can “make do” because IT will somehow find a way to keep systems running. CIOs must change this thinking. They can start the process by changing IT infrastructure investment justifications from technical explanations to corporate financial and risk management explanations. ... CIOs should also team with the CFO to help reframe the tech debt narrative, because CFOs are always on the lookout for new corporate financial and risk management scenarios. 


Leveraging Leadership: The Fourfold Path to Business Control

Belief systems function as a mechanism for communicating the core values, objectives, and mission of the organization, thus providing guidance and motivation to staff members. By encouraging people to improve their customer service through the inculcation of positive values, conduct, performance, and a feeling of inclusion, this lever ensures the fulfillment of the organization's objectives. In the absence of a clearly defined belief system, employees are forced to rely on conjecture regarding the organization's intended behaviors and objectives. ... Without stifling individuals' capacity for innovation or entrepreneurship, this control mechanism permits the development of policies and standards that instruct individuals on which behaviors to avoid. Boundary systems implement regulations, codes of conduct, and predetermined strategic boundaries to delineate acceptable and unacceptable employee conduct, thereby establishing governing parameters. These boundaries clearly define the consequences of violating ethical principles and the potential outcomes that should be avoided. 



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville

Daily Tech Digest - January 01, 2024

4 key devsecops skills for the generative AI era

CIOs and IT leaders must prepare their teams and employees for this paradigm shift and how generative AI impacts digital transformation priorities. Nicole Helmer, VP of development and customer success learning at SAP, says training must be a priority. “Companies should prioritize training for developers, and the critical factor in increasing adaptability is to create space for developers to learn, explore, and get hands-on experience with these new AI technologies,” she says. The shift may be profound and tactical as more IT automation becomes productized, enabling IT to shift to more innovation, architecture, and security responsibilities. “In light of generative AI, devops teams should deprioritize basic scripting skills for infrastructure provisioning and configuration, low-level monitoring configurations and metrics tracking, and test automation,” says Dr. Harrick Vin, chief technology officer of TCS. “Instead, they should focus more on product requirements analysis, acceptance criteria definition, and software and architectural design, all of which require critical thinking, design, strategic goal setting, and creative problem-solving skills.”


Don't neglect API functional testing

The first step to building a successful API functional testing strategy is to understand each API, its functions and its requirements. API requirements are often found within API documentation, but specific and necessary details are sometimes omitted. Work with the API developers to ensure documentation includes the expected behavior under all scenarios, error conditions and status codes, the API's purpose and objective, and how the API affects the application workflow. As a QA tester responsible for functional API testing, create a test plan and approach. Next, select an API testing tool that enables testers to create and execute both automated and manual tests. Many existing QA and developer tools include an option for API testing. Check the capabilities of your existing tools before adding another. Next, develop test cases. Once the test cases are created, organize them into all working combinations. One option is to create tests and then execute them. Or, within many tools, testers can quickly test as they go. In other words, you can be testing each request as you develop the test.
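
As a rough illustration of the approach above, documented expectations for each endpoint (status codes, error conditions, required fields) can be captured as data and checked mechanically. Everything here is hypothetical: the case names, fields, and simulated responses stand in for real HTTP calls an API testing tool would make.

```python
def check_response(expected, actual):
    """Compare an actual API response against documented expectations."""
    failures = []
    if actual["status"] != expected["status"]:
        failures.append(f"status: wanted {expected['status']}, got {actual['status']}")
    for field in expected.get("required_fields", []):
        if field not in actual.get("body", {}):
            failures.append(f"missing field: {field}")
    return failures

# Test cases cover both expected behavior and error conditions,
# as the documentation review above recommends.
test_cases = [
    {"name": "valid lookup", "status": 200, "required_fields": ["id", "name"]},
    {"name": "unknown id",   "status": 404, "required_fields": ["error"]},
    {"name": "bad auth",     "status": 401, "required_fields": ["error"]},
]

# Simulated responses stand in for live HTTP calls in this sketch.
responses = {
    "valid lookup": {"status": 200, "body": {"id": 7, "name": "Ada"}},
    "unknown id":   {"status": 404, "body": {"error": "not found"}},
    "bad auth":     {"status": 401, "body": {"error": "unauthorized"}},
}

for case in test_cases:
    problems = check_response(case, responses[case["name"]])
    print(case["name"], "PASS" if not problems else problems)
```

Keeping expectations as data like this makes it easy to add cases as the documentation fills in, whichever testing tool ultimately executes them.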


Decentralization stands as one of the most profound principles championed by blockchain. While the term often evokes images of intricate algorithms and cryptographic nodes, its implications on leadership and organizational structuring are profound. At its core, decentralization heralds a departure from the age-old top-down management models. Consider the rise of decentralized finance (DeFi) platforms, which are disrupting traditional banking systems. Instead of a centralized authority making decisions, these platforms empower their users through consensus mechanisms and democratized governance. Compound, a leading DeFi platform, is a testament to this. It operates with a decentralized governance model where token holders propose, discuss, and implement changes to the platform. This not only ensures transparency, but also inculcates a deep sense of ownership among its participants. This decentralization isn't just confined to the crypto realm. Businesses are realizing the value of distributed decision-making. For instance, the Spotify model of team organization, where squads, tribes, chapters, and guilds collaborate across functions, exemplifies a shift from rigid hierarchies to fluid, decentralized structures. 


Shaping finance through technological prowess

In the present scenario, technology stands as the cornerstone of well-informed decision-making for CFOs. The integration of data analytics and artificial intelligence can equip CFOs with robust tools to dissect vast data sets, enabling them to make precise predictions and optimise resource allocation. For instance, predictive analytics has emerged as a powerful instrument that can enable CFOs to anticipate market trends and customer behaviour, thereby guiding financial strategies with unprecedented precision. Consider a scenario where a CFO of a manufacturing company leverages data analytics to optimise inventory management. By analysing historical sales data, production rates, and external market factors, the CFO can use tools to predict demand fluctuations and adjust inventory levels accordingly. This approach may not only minimise excess inventory costs but also ensure that the company is well-prepared to meet customer demands swiftly. The financial decision-making process has transitioned from a reactive stance to one driven by data-driven insights, propelling the company toward financial agility.
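
The inventory scenario above can be sketched in a few lines. This is a deliberately simplified illustration, not a real forecasting model; the monthly sales figures, safety-stock buffer, and window size are all hypothetical.

```python
def moving_average_forecast(history, window=3):
    """Forecast next period's demand as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(history, on_hand, safety_stock=20):
    """Order enough units to cover forecast demand plus a safety buffer."""
    forecast = moving_average_forecast(history)
    needed = forecast + safety_stock - on_hand
    return max(0, round(needed))

monthly_sales = [120, 135, 150, 160, 155, 170]  # hypothetical units sold
print(reorder_quantity(monthly_sales, on_hand=90))  # → 92
```

A production system would fold in production rates and external market factors, as the article describes, but the shape of the decision (forecast, then adjust inventory) stays the same.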


Soon, every employee will be both AI builder and AI consumer

The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report's authors argue, or -- as they put it -- "instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one." ... Automated platforms and generative AI -- leveraged within an open and supportive corporate culture -- may amplify many human skills, they continue. "10x engineers could become much less rare. Especially as generative AI continues to bolster developer productivity and opens up a future of increased workplace automation, many of today's hindrances may not be relevant in the next five to 10 years." It's all about fostering a superior "developer experience," not just within IT shops, but across the enterprise as well. "As technology itself continues to become more and more central to the business, technology tasks and required talent will likely become central as well. ..."


How CTOs can win over the board room

Now is the time for engineering leaders to showcase engineering’s value. There’s not a single business that hasn’t been impacted by resource tightening over the last year. While CFOs are increasingly focusing on cost optimization within their businesses, they continue to prioritize growth, according to a survey by Gartner. Engineering leaders must show how they’re driving this growth. Engineering leaders who couldn’t clearly show major business impact were the first to see cuts during 2022 recession concerns. While the rest were forced to “do more with less,” they were at least able to sustain critical projects and fight for their headcount. Why? Because they clearly communicated the importance of specific investments and projects to the business’s success. No one can argue the last year has been easy for leaders across the board. But I believe good is coming from these challenges. It has forced engineering leaders to scrutinize their investments and allowed them to identify their most critical assets, enabling them to innovate even during economic uncertainty.


Data Privacy Paradox: Balancing Innovation with Protection in the Age of AI

While AI’s potential for progress shines bright, its foundation rests upon a vast ocean of personal data – our online activity, location trails, and even social media whispers. This dependence raises a chilling specter: data surveillance. The specter of governments and corporations peering over our digital shoulders, gleaning insights into our lives, fuels fears of mass surveillance and the potential misuse of this sensitive information. This specter chills not only with its invasive nature, but also with its implications for individual freedoms and potential abuses of power. But the concerns go beyond the watchful eye of Big Brother. AI’s algorithms, trained on vast datasets, can become unwitting vessels of algorithmic bias. Imagine a credit scoring system fueled by biased data, unfairly disadvantaging certain demographics. Or a criminal justice system where AI-powered predictions exacerbate existing prejudices. These are not dystopian nightmares; they are real possibilities if we fail to address the inherent biases that can creep into the heart of AI. Furthermore, the inner workings of these algorithms often remain shrouded in a veil of secrecy.


Why You Are So Resistant to Change — And How to Overcome It

As an entrepreneur, your ability to change and adapt is arguably the single most important contributor to long-term success. Stagnant businesses simply can't flourish, grow or (like those heart patients unwilling to modify their habits) survive. Ask yourself, how receptive are you to transformation in yourself, your processes, and your entire organization? Now is the time to evolve as a business owner. Start with an unwavering desire for continuous improvement. The next step is finding that emotional connection and the people or groups who can support you on your journey of change. For business leaders, these relationships are often found outside of one's own company in the form of peer advisory boards or mastermind groups. Peer advisory boards provide business owners with the requisite support and emotional connection that act as catalysts for forward progress and even innovation. As the president and CEO of such an organization, I get to witness the transformative power of connection all the time. It is truly amazing to see what can happen between owners and executives who care about each other's welfare and respect, support and elevate each other on their paths to transformation.


Open Source in 2024: More Volatility, More Risk, More AI

But there’s plenty standing in the way of increased international cooperation around tech – or indeed, international cooperation around anything. To paraphrase a former British prime minister, the greatest challenge for a leader is, “Events, dear boy. Events.” If the last three years have been event-packed, 2024 will be equally so, not least because of an unprecedented number of elections due, including the U.S. presidential race. These elections become cybersecurity incidents themselves. But they could also herald and shape further regulation and legislation that could directly affect the open source world. Both the U.S. and E.U. have been putting in place legislation and regulation around AI, but it is 2024 that will see how these efforts start playing out in the real (virtual) world. The European Union’s Cyber Resilience Act will also come into effect in 2024. Recently announced revisions have reportedly made it less overtly problematic for open source, but the final text is yet to be released. At the same time, the U.S. has already been turning the technology screws on China and Russia, choking off exports of GPUs to the former, for example, and enforcing wide-ranging sanctions on the latter.


Infrastructure, Operations Leaders Must Focus on DevOps, SRE Initiatives

Rajesh Ganesan, president of ManageEngine, notes data breaches and data privacy law violations can do irreparable damage to an organization's reputation. “By making privacy and data governance a top priority in 2024, I&O leaders can ensure their organizations are compliant with privacy laws and protected against data breaches,” he explained in an email interview. “It's crucial that every employee in the organization takes personal responsibility for data privacy.” Ganesan points out if organizations have the financial means, it is wise to invest in private data centers. “Organizations that invest in their own domain controller and security operations can control their security posture and make sure poor levels of security from the public service provider does not affect them,” he says. Not only are these companies protected from any breaches that occur in a public cloud environment, but they also have an easier time complying with legislation, as specific control measures can be put in place.



Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson

Daily Tech Digest - December 29, 2023

5 Ways That AI Is Set To Transform Cybersecurity

Cybersecurity has long been notoriously siloed, with organizations installing many different tools and products, often poorly interconnected. No matter how hard vendors and organizations work to integrate tools, coalescing all relevant cybersecurity information into one place remains a big challenge. But AI offers a way to combine multiple data sets from many disparate sources and provide a truly unified view of an organization’s security posture, with actionable insights. And with generative AI, gaining those insights is so easy, a matter of simply asking the system questions such as “What are the top three things I could do today to reduce risk?” or “What would be the best way to respond to this incident report?” AI has the potential to consolidate security feeds in a way the industry has never been able to quite figure out. Generative AI will blow up the very nature of data infrastructure. Think about it: All the different tools that organizations use to store and manage data are built for humans. Essentially, they’re designed to segment information and put it in various electronic boxes for people to retrieve later. It’s a model based on how the human mind works.


Microservices Resilient Testing Framework

Resilience in microservices refers to the system's ability to handle and recover from failures, continue operating under adverse conditions, and maintain functionality despite challenges like network latency, high traffic, or the failure of individual service components. Microservices architectures are distributed by nature, often involving multiple, loosely coupled services that communicate over a network. This distribution increases the system's exposure to potential points of failure, making resilience a critical factor. A resilient microservices system can gracefully handle partial failures, prevent them from cascading through the system, and ensure overall system stability and reliability. For resilience, it is important to think in terms of positive and negative testing scenarios. The right combination of positive and negative testing plays a crucial role in achieving this resilience, allowing teams to anticipate and prepare for a range of scenarios and maintain a robust, stable, and trustworthy system. For this reason, the rest of the article will focus on negative and positive scenarios for all our testing activities.
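
A minimal sketch of what such a positive/negative pair looks like in practice, assuming a hypothetical recommendation service with a static fallback (the service name and fallback value are illustrative, not from the article):

```python
class DependencyDown(Exception):
    """Raised when a downstream service is unreachable."""

def get_recommendations(fetch):
    """Call the downstream service; degrade gracefully if it fails."""
    try:
        return fetch()
    except DependencyDown:
        return ["default-item"]  # fallback keeps the caller functional

# Positive scenario: the dependency responds normally.
assert get_recommendations(lambda: ["a", "b"]) == ["a", "b"]

# Negative scenario: the dependency fails, but the caller still gets
# a usable (degraded) response instead of a cascading error.
def failing_fetch():
    raise DependencyDown("timeout")

assert get_recommendations(failing_fetch) == ["default-item"]
print("positive and negative scenarios passed")
```

The negative test is the one that actually exercises resilience: it asserts the failure stays contained rather than cascading, which is exactly the property the paragraph above describes.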


Skynet Ahoy? What to Expect for Next-Gen AI Security Risks

From a cyberattack perspective, threat actors already have found myriad ways to weaponize ChatGPT and other AI systems. One way has been to use the models to create sophisticated business email compromise (BEC) and other phishing attacks, which require the creation of socially engineered, personalized messages designed for success. "With malware, ChatGPT enables cybercriminals to make infinite code variations to stay one step ahead of the malware detection engines," Harr says. AI hallucinations also pose a significant security threat and allow malicious actors to arm LLM-based technology like ChatGPT in a unique way. An AI hallucination is a plausible response by the AI that's insufficient, biased, or flat-out not true. "Fictional or other unwanted responses can steer organizations into faulty decision-making, processes, and misleading communications," warns Avivah Litan, a Gartner vice president. Threat actors also can use these hallucinations to poison LLMs and "generate specific misinformation in response to a question," observes Michael Rinehart, vice president of AI at data security provider Securiti. 


Cybersecurity teams need new skills even as they struggle to manage legacy systems

To stay ahead, though, security leaders should incorporate prompt engineering training for their team, so they can better understand how generative AI prompts function, the analyst said. She also underscored the need for penetration testers and red teams to include prompt-driven engagements in their assessment of solutions powered by generative AI and large language models. They need to develop offensive AI security skills to ensure models are not tainted or stolen by cybercriminals seeking intellectual property. They also have to ensure sensitive data used to train these models are not exposed or leaked, she said. In addition to the ability to write more convincing phishing emails, generative AI tools can be manipulated to write malware despite limitations put in place to prevent this, noted Jeremy Pizzala, EY's Asia-Pacific cybersecurity consulting leader. He noted that researchers, including himself, have been able to circumvent ethical restrictions that guide platforms such as ChatGPT and prompt them to write malware.


The relationship between cloud FinOps and security

Established FinOps and cybersecurity teams should annually evaluate their working relationship as part of continuous improvement. This collaboration helps ensure that, as practices and tools evolve, the correct FinOps data is available to cybersecurity teams as part of their monitoring, incident response and post-incident forensics. The FinOps Foundation doesn't mention cybersecurity in its FinOps Maturity Model. But, by all rights, FinOps and cybersecurity collaboration indicates a maturing organization in the model's Run phase. Ideally, moves to establish such collaboration should show themselves in the Walk stage. ... Building a relationship between the FinOps and cybersecurity teams should start early, when an organization chooses a FinOps tool. A FinOps team can better forecast expenses, plan budget allocation and avoid unnecessary costs by understanding security requirements and constraints. These forecasts result in a more cost-effective and financially efficient cloud operation, so plan for some level of cross-training between the teams.


What is GRC? The rising importance of governance, risk, and compliance

Like other parts of enterprise operations, GRC comprises a mix of people, process, and technology. To implement an effective GRC program, enterprise leaders must first understand their business, its mission, and its objectives, according to Ameet Jugnauth, the ISACA London Chapter board vice president and a member of the ISACA Emerging Trends Working Group. Executives then must identify the legal and regulatory requirements the organization must meet and establish the organization’s risk profile based on the environment in which it operates, he says. “Understand the business, your business environment (internal and external), your risk appetite, and what the government wants you to achieve. That all sets your GRC,” he adds. The roles that lead these activities vary from one organization to the next. Midsize to large organizations typically have C-level executives — namely a chief governance officer, chief risk officer, and chief compliance officer — to oversee these tasks, McKee says. These executives lead risk or compliance departments with dedicated teams.


Revolutionising Fraud Detection: The Role of AI in Safeguarding Financial Systems

Conventional fraud detection methods, primarily rule-based systems, and human analysis, have proven increasingly inadequate in the face of evolving fraud tactics. Rule-based systems, while effective in identifying simple patterns, often struggle to adapt to the ever-changing landscape of fraud. Fraudsters have stronger motivation and they evolve faster than the rules in the rules engine. ... The same volumes of data that are overwhelming for traditional fraud detection systems are fuel for AI. With its ability to learn from vast amounts of data and identify complex patterns, AI is poised to revolutionize the fight against fraud. ... While AI offers immense potential, it’s crucial to acknowledge the challenges associated with its adoption. Data privacy concerns, ethical considerations around algorithmic bias, and the need for robust security measures are all critical aspects that demand careful attention. As AI opens new frontiers in fraud prevention, unregulated AI technology such as deepfake in the wrong hands could also enable sophisticated impersonation scams. However, the benefits of AI far outweigh the challenges. 
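
The contrast between a static rule and a data-driven check can be sketched simply. This is an illustrative toy, not a real fraud model; the amounts, threshold, and account history are hypothetical, and production systems learn far richer patterns than a single z-score.

```python
import statistics

def rule_based_flag(amount, limit=1000):
    """Static rule: flag any transaction above a fixed limit."""
    return amount > limit

def zscore_flag(amount, history, threshold=3.0):
    """Data-driven check: flag amounts far outside this account's own history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) / stdev > threshold

history = [42, 55, 38, 61, 47, 52, 44, 58]  # typical spend for one account

# A 900 transaction slips under the static limit, yet it is wildly
# anomalous for this particular account's behavior.
print(rule_based_flag(900), zscore_flag(900, history))  # → False True
```

This is the gap the article describes: the fixed rule encodes one pattern and misses everything else, while a model fitted to the data adapts to each account, and fraudsters who evolve past the rule still stand out statistically.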


API security in 2024: Predictions and trends

The rapid rate of change of APIs means organizations will always have vulnerabilities that need to be remediated. As a result, 2024 will usher in a new era where visibility will be a priority for API security strategies. Preventing attackers from entering the perimeter is not a 100% foolproof strategy. Having real-time visibility into a security environment, by contrast, will enable rapid responses from security teams that neutralize threats before they impact operations or extract valuable data. ... With the widespread use of APIs, especially in sectors such as financial services, regulators are looking to encourage transparency in APIs. This means data privacy concerns and regulations will continue to impact API use in 2024. In response, organizations are becoming wary of having third parties hold and access their data to conduct security analyses. We expect to see a shift in 2024 where organizations will demand running security solutions locally within their own environments. Self-managed solutions (either on-premises or private cloud) eliminate the need to filter, redact, and anonymize data before it’s stored.


The Terrapin Attack: A New Threat to SSH Integrity

Microsoft’s logic is that the impact on Win32-OpenSSH is limited. This is a major mistake. Microsoft’s decision allows unknown server-side implementation bugs to remain exploitable in a Terrapin-like attack, even if the server got patched to support “strict kex.” As one Windows user noted, “This puts Microsoft customers at risk of avoidable Terrapin-style attacks targeting implementation flaws of the server.” Exactly so. You see, for this protection to be effective, both client and server must be patched. If one or the other is vulnerable, the entire connection can still be attacked. So to be safe, you must patch and update both your client and server SSH software. So, if you’re on Windows and you haven’t manually updated your workstations, their connections are open to attack. While patches and updates are being released, the widespread nature of this vulnerability means that it will take time for all clients and servers to be updated. Because you must already have an MITM attacker in place to be vulnerable, I wouldn’t spend the holiday season worrying myself sick. I mean, you’re sure you don’t already have a hacker inside your system, right? Right!?
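
For rough triage, SSH servers announce their software in a version banner (e.g. SSH-2.0-OpenSSH_9.6), and OpenSSH 9.6 is the release that added strict key exchange. The sketch below only parses a banner string; it is not a substitute for a real vulnerability scanner, and non-OpenSSH implementations need their own vendors' advisories.

```python
import re

def openssh_lacks_strict_kex(banner):
    """Return True if the banner is an OpenSSH release older than 9.6,
    False if 9.6 or newer, and None for non-OpenSSH banners (unknown)."""
    m = re.match(r"SSH-2\.0-OpenSSH_(\d+)\.(\d+)", banner)
    if not m:
        return None
    major, minor = int(m.group(1)), int(m.group(2))
    return (major, minor) < (9, 6)

print(openssh_lacks_strict_kex("SSH-2.0-OpenSSH_9.5"))  # → True
print(openssh_lacks_strict_kex("SSH-2.0-OpenSSH_9.6"))  # → False
```

Remember the point above: a current server banner alone proves nothing, because the client must be patched too for strict kex to take effect.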


Supporting Privacy, Security and Digital Trust Through Effective Enterprise Data Management Programs

Those professionals responsible for supporting privacy efforts should therefore prioritize effective enterprise data management because it is integral to safeguarding individuals’ privacy. A well-structured data management framework works to ensure that personal information is handled ethically and in compliance with regulations, while fostering a culture of responsible data stewardship within organizations. When done right, this reinforces trust with stakeholders, serves as a differentiator in the marketplace, improves visibility into data ecosystems, expands the reliability of data, and optimizes scalability and innovative go-to-market efforts. ... Most, if not all, of the global data privacy laws and regulations require data to be managed effectively. To comply with these laws and regulations, organizations must first understand the data they collect, the purposes for its collection, how it is used, how it is shared, how it is stored, how it is destroyed, and so on. Only after organizations have a full understanding of their data ecosystem can they begin to implement effective controls to both protect data and preserve the ability of the data to achieve intended operational goals.



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown