Daily Tech Digest - April 07, 2024

AI advancements are fueling cloud infrastructure spending

The IDC report offers insights into the evolving landscape of cloud deployment infrastructure spending, explicitly focusing on AI. I’m not sure that anyone will push back on that. However, there are some other market dynamics that we should be paying attention to, namely: Tech leaders’ rapid deployment of AI capabilities is changing infrastructure requirements, emphasizing the need for specialized, high-performance hardware. However, this will likely translate quickly into storage and databases, which are more critical to AI than processing. Who would have thunk? The shift towards GPU-heavy servers at higher price points but fewer units sold reflects the evolving market dynamics influenced by the priorities of cloud providers and enterprise tech behemoths. As I pointed out, this could be a false objective that leads many, including the cloud providers, down the wrong path. ... The significant uptick in cloud infrastructure spending underscores a robust investment in AI-related capabilities, which has far-reaching implications for technology and business landscapes.


How to develop your skillset for the AI era

Grounded in a rich understanding of the broader context and enhanced by a diverse skill set, building specialization will ensure that engineers can bring unique insights, creativity, and solutions that AI cannot. It's the intersection of depth and breadth in an engineer's expertise that will define their irreplaceability in an AI-driven world. This is where Roger Martin's Doctrine of Relentless Utility comes into play, a career strategy that focuses on finding your niche and monopolizing it. As you become more adept at navigating between different roles and perspectives, you'll be better positioned to uncover unique opportunities where your particular blend of skills and interests intersect with unmet needs within your team or organization. Aligning what you're good at with areas where you can make a significant impact allows you to establish a distinctive role that plays to your strengths and passions. This strategy promotes an active, value-driven approach, looking for ways to contribute beyond the usual scope of your role. Your niche could be bridging the gap between advanced technical knowledge and non-technical stakeholders or clients.


A phish by any other name should still not be clicked

The proper way for enterprises to reach out on these matters is something like, “There is a new billing matter that requires your attention. Please log into your portal and look into it.” Why don’t most enterprises do that? Some blame a lack of training, and there is absolutely a lot of truth in that. But it’s often quite deliberate. More responsible enterprises have tried doing this the proper way, but too many customers complained along the lines of, “Do you know how many portals I have to deal with? Give me a link to the portal you want me to use.” ... This gets us right back to the security-vs.-convenience nightmare. The problem is complicated because the harm comes in two steps. It’s not that the customer will be hurt if they click on your link. It’s that you’re inadvertently making them comfortable with clicking on an unknown link, and they might get hurt two days from now when they encounter an actual phishing email. Will the enterprise be held liable, especially if it can’t be proven that the victim clicked because of what was sent? It gets even worse. The old advice used to be to mouse over suspicious links and make sure they’re legitimate. Today, that advice doesn’t work.


How to keep humans in charge of AI

First, let users choose guardrails through the marketplace. We should encourage a large multiplicity of fine-tuned models. Different users, journalists, religious groups, civil organizations, governments and anyone else who wants to should be able to easily create customized versions of open-source base models that reflect their values and add their own preferred guardrails. Users should then be free to choose their preferred version of the model whenever they use the tool. This would allow companies that produce the base models to avoid, to the extent possible, having to be the “arbiters of truth” for AI. While this marketplace for fine-tuning and guardrails will lower the pressure on companies to some extent, it doesn’t address the problem of central guardrails. Some content — especially when it comes to images or video — will be so objectionable that it can’t be allowed across any fine-tuned models the company offers. ... How can companies impose centralized guardrails on these issues that apply to all the different fine-tuned models without coming right back to the politics problem Gemini has run head-long into? 


Managers tend to target loyal workers for exploitation, study finds

The researchers hypothesized that managers might view loyal employees as more exploitable, targeting them for exploitation. Alternatively, they considered whether managers might protect loyal workers to retain their allegiance. Four studies were conducted with participants ranging from 211 to 510 full-time managers, recruited via Prolific. In the first study, managers were split into three groups, with the first group reading about a loyal employee named John. The survey then described scenarios requiring someone to work overtime or perform uncomfortable tasks without compensation, querying the likelihood of assigning John to these tasks. The second and third groups underwent similar procedures, but with John described as either disloyal or without any characterization. All participants assessed John’s willingness to make personal sacrifices. ... “Given that workers who agree to participate in their own exploitation also acquire stronger reputations for loyalty, the bidirectional causal links between loyalty and exploitation have the potential to create a vicious circle of suffering for certain workers.” The study sheds light on the relationship between workers’ loyalty and behaviors of managers. 


Mastering the CISO role: Navigating the leadership landscape

CISOs must also cultivate stronger partnerships with their C-suite counterparts. IDC’s survey revealed discrepancies in how CISOs and CIOs perceive the CISO’s role, underscoring the need for better alignment. Creed recounted a recent example where the Allegiant Travel board made decisions about connected aircraft without involving the CISO, leading to a last-minute “fire drill” to address cyber security requirements. “Do you think the board, when they first started talking of going down this path of ‘we’re going to expand the fleet’, considered that there might be security implications in that?” he asked. ... To bridge this gap, CISOs must proactively educate executives on the business implications of security risks and advocate for a seat at the strategic decision-making table. As Russ Trainor, Senior Vice President of IT at the Denver Broncos, suggested, “Sometimes I’ll forward news of the breaches over to my CFO: here’s how much data was exfiltrated, here’s how much we think it cost. Those things tend to hit home.” The evolving CISO role demands a delicate balance of technical expertise, business acumen, and communication prowess. 


How companies are prioritising employee health for organisational success

The HR folks have a critical role in implementing wellness initiatives, believes Ritika. “Fostering a supportive work culture, providing resources for physical and mental health, and advocating for policies that prioritise employee well-being to attract and retain talent effectively are the key responsibilities of HR leaders.” According to Ritika, investing in employee health and well-being is not just a commitment but a cornerstone of organisational ethos. The Human Resource (HR) department plays a pivotal role in promoting and protecting the health of employees within an organisation. As per a report, an alarming 43% of Indian tech workers encounter health issues directly linked to their job responsibilities. Additionally, the study indicates that these health issues go beyond physical ailments, with almost 45% of respondents facing mental health challenges like stress, anxiety, and depression. Samra Rehman, Head of People and Culture, Hero Vired says that HR leaders are responsible for establishing policies and programs that prioritise employee well-being, such as implementing health insurance plans, offering gym memberships or fitness classes, and organising wellness workshops.


Decoding Synchronous and Asynchronous Communication in Cloud-Native Applications

The choice between synchronous and asynchronous communication patterns is not binary but rather a strategic decision based on the specific requirements of the application. Synchronous communication is easy to implement and provides immediate feedback, making it suitable for real-time data access, orchestrating dependent tasks, and maintaining transactional integrity. However, it comes with challenges such as temporal coupling, availability dependency, and network quality impact. On the other hand, asynchronous communication allows a service to initiate a request without waiting for an immediate response, enhancing the system’s responsiveness and scalability. It offers flexibility, making it ideal for scenarios where immediate feedback is not necessary. However, it introduces complexities in resiliency, fault tolerance, distributed tracing, debugging, monitoring, and resource management. In conclusion, designing robust and resilient communication systems for cloud-native applications requires a deep understanding of both synchronous and asynchronous communication patterns. 
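The trade-off above can be made concrete with a minimal sketch (illustrative only, standard-library Python): the synchronous caller blocks until the service replies, while the asynchronous caller enqueues work and moves on, letting a worker process it later. The names `fetch_price_sync` and `price_service` are invented for the example.

```python
import queue
import threading

# Synchronous style: the caller blocks until the dependent service replies,
# creating the temporal coupling described above.
def fetch_price_sync(service):
    return service()

# Asynchronous style: the caller enqueues a request and continues;
# a worker thread processes it later, so the caller never blocks.
requests = queue.Queue()
results = []

def worker():
    while True:
        job = requests.get()
        if job is None:          # sentinel to stop the worker
            break
        results.append(job())

t = threading.Thread(target=worker)
t.start()

price_service = lambda: 42.0
print(fetch_price_sync(price_service))  # blocks, immediate feedback

requests.put(price_service)             # returns immediately
requests.put(None)
t.join()
print(results)                          # the reply arrived out-of-band
```

In a real system the in-process queue would be a message broker, and the failure modes the excerpt mentions (lost messages, tracing across hops) come with that substitution.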


Hackers Use Weaponized PDF Files to Deliver Byakugan Malware on Windows

Due to the high level of trust users place in them and their popularity, hackers frequently use weaponized PDF files as attack vectors. PDFs can contain harmful code or exploits that abuse flaws in PDF readers. Once an unsuspecting user opens the malicious PDF, the payload runs and infiltrates the system. ... FortiGuard Labs discovered a Portuguese-language PDF file distributing the multi-functional Byakugan malware in January 2024. The malicious PDF tricks people into clicking a link by presenting a blurred table. This in turn activates a downloader that drops a copy of itself (require.exe) and downloads a DLL used for DLL hijacking, then runs require.exe to retrieve the main module (chrome.exe). Notably, the downloader behaves differently when it is run as require.exe from the temp folder, an evident sandbox-evasion technique.


Cybercriminal adoption of browser fingerprinting

While browser fingerprinting has been used by legitimate organizations to uniquely identify web browsers for nearly 15 years, it is now also commonly exploited by cybercriminals: a recent study shows one in four phishing sites using some form of this technique. ... Browser fingerprinting uses a variety of client-side checks to establish browser identities, which can then be used to detect bots or other undesirable web traffic. Numerous pieces of data can be collected as part of fingerprinting, including: time zone; language settings; IP address; cookie settings; screen resolution; browser privacy settings; user-agent string. Browser fingerprinting is used by many legitimate providers to detect bots misusing their services and other suspicious activity, but phishing site authors have also realized its benefits and are using the technique to avoid automated systems that might flag their website as phishing. By implementing their own browser fingerprinting checks before loading their site content, threat actors are able to conceal phishing content in real time. For example, Fortra has observed threat actors using browser fingerprinting to bypass the Google Ad review process.
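As a rough illustration of the server-side half of this idea, the sketch below hashes a few client-reported signals into a single identifier. Real fingerprinting also gathers client-side signals (screen resolution, canvas rendering) via JavaScript; the header choices and values here are assumptions for illustration, not any vendor's actual algorithm.

```python
import hashlib

def fingerprint(headers: dict) -> str:
    """Hash a set of client-reported signals into one stable identifier."""
    signals = [
        headers.get("User-Agent", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
    ]
    # Join with a delimiter so distinct signal sets can't collide trivially.
    return hashlib.sha256("|".join(signals).encode()).hexdigest()

# An automated scanner and a real browser report different signals,
# so they produce different fingerprints.
bot = {"User-Agent": "python-requests/2.31"}
browser = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept-Language": "en-GB,en;q=0.9",
}
print(fingerprint(bot) == fingerprint(browser))  # False: distinct identities
```

A phishing kit using this trick would serve benign content when the fingerprint looks like a known scanner and the phishing page otherwise.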




Quote for the day:

"What you do has far greater impact than what you say." -- Stephen Covey

Daily Tech Digest - April 06, 2024

'Leadership? No, Thank You': Navigating A New Organizational Environment Model

The culture that pushes people to be leaders frequently sugarcoats a position like that by showing all the advantages, juicy challenges, fancy bonuses and sparkly cars. The reality is that the responsibility is heavy and being a leader is a lot closer to being a psychologist/coach/rescuer/mom/dad than a hands-on worker. A leadership position calls for emotional intelligence growth, great adaptability and, believe it or not, ego detachment. A great leader is one who does not hoard talent but lets people fly, knows that the best team is made of people who are different from and better than they are, learns how to hold pressure and remain calm and, above all, can be trusted. Show workers this truth. ... Sometimes, not wanting a leadership position may indicate simply that one is afraid of it and not that one doesn’t want it. We all know that. Companies can and must push people out of their comfort zones but also need to maintain a balance of respecting their preferences. How? Training them before a leadership role. Yes. Most companies train leaders after they have assumed a leadership role. 


Modern Application Management Requires Deeper Internet Visibility

Unfortunately, most IT teams today have limited ability to discern how the performance of Internet services is impacting their applications. There are, of course, Internet performance management (IPM) tools capable of surfacing network performance metrics. The challenge and opportunity now is to surface those metrics in context with all the other telemetry data that DevOps teams collect from the various application performance management (APM) and observability platforms they rely on to monitor and troubleshoot application environments. ... Broadly, there are three major classes of blind spots that impact distributed application performance. The first and arguably most opaque are the services provided by third-party vendors. Ranging from content delivery networks (CDNs) to software-as-a-service (SaaS) applications, each of these services is controlled by an external service provider that typically doesn’t allow a DevOps team to collect telemetry data by deploying agent software in their IT environments. At best, they may expose an application programming interface (API) to enable an agentless approach to collecting data, but that method doesn’t typically provide the level of control required to optimize application performance.


Ruby on Rails Is Not Dead and May Even Be AI Panacea for Devs

Ruby on Rails has always been promoted as a tool that a single person can use to create a web application — that’s why it was so popular with Web 2.0 entrepreneurs. The Rails website in April 2005 described the framework as “a full-stack, open-source web framework in Ruby for writing real-world applications with joy and less code than most frameworks spend doing XML sit-ups.” While XML is no longer a factor in 2024, DHH continues to do interviews espousing the “joy and less code” philosophy. In an interview with the devtools.fm podcast last month, he even suggested this approach will help developers adapt in the current generative AI era. “As we are now facing perhaps an existential tussle with AI,” he said, “I think it’s never been more important that the way we design programming languages is designed for the human first. The human needs all the help the human can get, if we’re going to have any chance to remain not only just valuable, but relevant as a programmer. And maybe that’s a lost cause anyway, but at least in the last 20 years that I’ve been working with Ruby on Rails, I’ve seen that bet just pay [off] over and over again.”


Business leaders can no longer afford to wait until disruptions occur to measure their financial impact. They need insights to protect the customer and their financial bottom line as quickly and seamlessly as possible. AI and ML provide the means to achieve such agility, offering “quick wins” in the form of immediate financial value. By harnessing accelerators to automate data capture and deliver intelligent insights at the point of disruption, reducing lead time to capture data from several weeks to near real time, they obtain optimized recommendations at the point of disruption across the value network, thus protecting the customer experience and the financial impact on the business in near real time. ... AI contributes to decision intelligence in supply chains. A good example of decision-making processes that have been enhanced by AI is the Amazon Scan, Label, Apply & Manifest (SLAM) process. When a customer places an order, there are multiple microservices and intelligent algorithms that run to find the most optimal way to fulfill it, based on the customer promise and best financial business outcome. 


Is AI driving tech layoffs?

GenAI simply isn’t ready yet. Just like the internet of 1999, the genAI tools of 2024 will eventually get there. But in the meantime, I predict, as Gartner would put it, we’re heading quickly to the “Trough of Disillusionment.” That’s where the initial burst of excitement over a new technology runs out and everyone realizes the reality isn’t close to what we all dreamed it would be. I’ve seen too many of these bubbles over the years and still we fall for it every time. What’s different now, and why the coming fall will hurt so much, is that almost every company has fallen under the genAI spell. Not only are businesses planning to move to it, they’re already replacing the people they need to get their work done with half-baked AI models. This is going to greatly accelerate the coming crash. Don’t get me wrong. GenAI will eventually replace some jobs. But former US Treasury Secretary and current OpenAI board member Larry Summers gets it right. He recently said, “If one takes a view over the next generation, this could be the biggest thing that has happened in economic history since the Industrial Revolution.”


Unlocking the Power of Generative AI in Banking: Insights from Microsoft’s Daragh Morrissey

The first use case I would start with is your developers. It’s the most mature generative AI scenario. And as you build new applications, why not build them with generative AI? Then I would think about the out-of-the-box gen AI that you’ll get from us if you start introducing it to Teams and Office. You’ll hit a ton of use cases there that are sort of horizontal across the whole business. Then, you’ll be left with a set of custom use cases. These could be things like a contact center; you could start by just enhancing what you currently have in your contact center. You don’t have to rip out your contact center, either. It’s just about adding the capabilities on top. Building a knowledge base is also a great way of learning how to use this inside the organization. ... One of the things we did as well was create this concept of a citizen developer or citizen data scientist. You could just take a set of data, and we can prompt you to say, “It looks like you need one of these models; that could be sentiment analysis or something.” Then, it will build a model with the data.


Critical Bugs Put Hugging Face AI Platform in a 'Pickle'

In examining Hugging Face's infrastructure and ways to weaponize the bugs they discovered, Wiz researchers found that anyone could easily upload an AI/ML model to the platform, including those based on the Pickle format. Pickle is a widely used module for storing Python objects in a file. Though even the Python Software Foundation itself has deemed Pickle insecure, it remains popular because of its ease of use and the familiarity people have with it. "It is relatively straightforward to craft a PyTorch (Pickle) model that will execute arbitrary code upon loading," according to Wiz. Wiz researchers took advantage of the ability to upload a private Pickle-based model to Hugging Face that would run a reverse shell upon loading. They then interacted with it using the Inference API to achieve shell-like functionality, which the researchers used to explore their environment on Hugging Face's infrastructure. That exercise quickly showed the researchers their model was running in a pod in a cluster on Amazon Elastic Kubernetes Service (EKS). ... With Hugging Face Spaces, Wiz found an attacker could execute arbitrary code during application build time that would let them examine network connections from their machine.
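The `__reduce__` hook is what makes crafting such a model straightforward. The benign sketch below (not Wiz's actual payload) shows that merely calling `pickle.loads` executes an attacker-chosen callable; here it is `os.system` running a harmless `echo`.

```python
import os
import pickle

class Payload:
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object: as a call
        # to an arbitrary callable with arbitrary arguments. An attacker
        # would pass a malicious command; here it is a harmless echo.
        return (os.system, ("echo pickle payload executed",))

blob = pickle.dumps(Payload())

# No Payload method is ever called explicitly -- deserializing alone
# runs the command, which is why loading untrusted pickles is unsafe.
status = pickle.loads(blob)
print(status)  # os.system's exit status, 0 on success
```

This is also why safer model formats (such as safetensors) that store only data, not reconstruction instructions, are recommended for untrusted model files.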


Sophisticated Latrodectus Malware Linked to 2017 Strain

While initial analysis suggested Latrodectus is a new variant of IcedID, subsequent research found that it is a new malware, most likely named Latrodectus because of a string identified in the code. Latrodectus employs infrastructure used in historic IcedID operations, indicating potential ties to the same threat actors. IcedID, first discovered in 2017, has been described as a banking Trojan and remote access Trojan. Researchers gained insights into the activities of threat actors TA577 and TA578, the primary distributors of Latrodectus, that illustrate the evolving tactics they have used over time. TA577, previously known for its distribution of Qbot, used Latrodectus in three campaigns in November 2023 before switching back to Pikabot. In contrast, TA578 has been predominantly distributing Latrodectus since mid-January 2024, using contact forms and impersonation techniques to deliver the malware to targets. Latrodectus functions as a downloader, and its primary objective is to download payloads and execute arbitrary commands. Its sandbox evasion techniques are noteworthy, and it shares similarities with the IcedID malware.


Deceptive AI: The Human-Machine Romance

Like God, AI bots assure us they are omnipresent and omniscient and can be a panacea for all our emotional needs, a claim that is too good to be true. All of us, at different points in our lives, have witnessed miserable bot failures in response to well-scoped, structured and sequenced business processes. How, then, do we believe a neural network can handle complex, unstructured human emotions? The outcomes will be insanely unpredictable. That is exactly what happened with 21-year-old Jaswant Singh Chail, who was coerced by a romantic chatbot into breaking into Windsor Castle to kill the Queen of England. He is now serving a prison sentence, still firmly believing the AI bot is an incarnation of the angel who will eventually reunite with him. Don’t see this scenario in isolation; such AI bots in the hands of extremists can be a game changer in recruiting and radicalizing younger minds to carry out unspeakable crimes (remember the gory effects of the suicide game “Blue Whale”?); unethical business houses can leverage such channels to boost their product sales.


Six reasons to go colo

Historically, enterprises have built, equipped, and operated their own data centers according to need – both capacity needs and different geographical locations. While this approach has qualities in terms of being tailor-made for your specific operations – the infrastructure lacks the scalability, flexibility and sometimes even cloud connectivity required to gain and keep a competitive advantage in today’s fast-paced markets. Furthermore, the investments related to constructing your own data center are highly capital-intensive, which makes it difficult for some companies to pursue such a strategy. Against this background, the reasons for the fast-paced growth of the data center colocation industry become easy to grasp. Data center colocation is a considerably more accessible, scalable, and cost-efficient solution for your facility. When using data center colocation, you consume the physical data center ‘as a service’. The often-used expression ‘let your business focus on what it does best’ applies well here. You get the peace of mind of knowing your physical infrastructure has secured uptime, and you leave the matters of cooling, electricity supply and physical security to a partner who is an expert in exactly that.



Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." -- Philippos

Daily Tech Digest - April 05, 2024

Time for a Disaster Recovery Health Check

By law, the board and C-level officers have the responsibility of executing due diligence and competence in the conduct of company operations and the protection of assets. They do not want to see gaping holes and exposures in corporate due diligence that are due to disaster recovery plans that fail to address new security threats or the presence of edge technology. This board awareness can give IT the leverage it needs to prioritize its DR plan update so the revised plan can cover a much broader IT footprint than just the central data center -- and by doing so, address new risks. A good way to present to the board and corporate officers a need to invest time and resources into a DR plan update is to present the need for an update along with the risks of not doing one. This can be accomplished by describing various disaster scenarios that have actually happened to other organizations and explaining how they could plausibly happen to the company itself. By showing real-life situations, the CIO can present the most likely disaster scenarios and consequences and what is needed in terms of plan revisions and investments to minimize those risks.


Mastering HTTP DDoS attack defence: Innovative strategies for web protection

The threat landscape for HTTP DDoS attacks is constantly expanding, with attackers continually developing new techniques to evade detection. Some common methods include using HTTP GET or POST requests to consume server resources, leveraging malformed HTTP headers to confuse web applications, and employing slowloris attacks that open and maintain multiple connections to the server without closing them, eventually exhausting server resources. These attacks can have devastating effects on businesses, including service disruption, loss of customer trust, and significant financial losses. The need for effective mitigation strategies has never been more critical. ... While Radware’s products offer robust protection against HTTP DDoS attacks, it’s essential for businesses to adopt a proactive security posture. Some best practices include: Regularly Updating Security Systems: Ensure that all security systems are up to date with the latest signatures and detection algorithms. Implementing Access Control Lists (ACLs): Use ACLs to restrict access to resources, minimising the potential impact of an attack. 
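As a concrete illustration of one such control, the sketch below implements a per-IP sliding-window rate limiter of the kind a WAF or ACL layer might apply in front of a web application. The window and threshold values are arbitrary, and this is a toy sketch, not Radware's product logic.

```python
from collections import defaultdict, deque

# Illustrative limits: at most 5 requests per IP in any 10-second window.
WINDOW_SECONDS = 10
MAX_REQUESTS = 5

_history = defaultdict(deque)  # per-IP timestamps of recent requests

def allow(ip: str, now: float) -> bool:
    window = _history[ip]
    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # over the limit: drop, delay, or challenge the client
    window.append(now)
    return True

# A burst of 8 requests in 8 seconds from one IP trips the limit.
decisions = [allow("203.0.113.7", t) for t in range(8)]
print(decisions)  # first 5 allowed, remaining 3 denied
```

Real mitigations layer this with connection-duration tracking (to catch slowloris-style slow connections) and behavioral signatures, since a fixed request-rate threshold alone is easy for distributed attackers to stay under.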


5 Cybersecurity Questions Boards Can’t Afford To Ignore

When it comes to cybersecurity incident response, companies need to practice, practice, practice. For the board, that could look like ensuring there are not only clear steps in place for what needs to happen should an incident occur but also ensuring that those steps have been practiced ahead of time in some sort of tabletop exercise—similar to how you’d practice exiting the building during a fire drill to know where to go in the event of an emergency. ... While it’s not a requirement, some businesses have decided to add a cybersecurity or technology expert directly to the board to guide them on risk. Businesses can decide if this makes sense for their risk needs depending on their individual risk profiles. Many former CISOs or former cybersecurity leaders are looking to sit on or advise boards, as well as businesses’ own CISOs. ... Directors should also ask themselves if they are budgeting enough for cybersecurity across the organization. They should also work to understand what financial impacts or even regulatory fines they could face if they don't invest in cybersecurity appropriately or report incidents as required. Is your organization investing enough?


Is it Already Too Late to Get Started with GenAI? No, But the Clock’s Ticking

At some point institutions will bite the bullet and let GenAI interact with customers or with their live data. Smith believes it will be important to alert customers when they are offered a product or process in which generative artificial intelligence plays a role. Both staff and the public must be clear on this and have confidence in what the bank discloses. Some consumer paranoia exists about GenAI, but Smith says Accenture research indicates that many customers will accept its use with their data — if they feel that they are receiving some benefit in return. Asked for an example, Smith points to a frequent beef about bank customer service — having to explain a situation all over again when trying to work out a problem or address a need and being transferred from one staffer to another. “I shouldn’t have to educate you on what happened two days ago,” says Smith. “I want to walk in and find that you already know.” GenAI can help with this type of situation. One last pointer from Smith concerns the customer experience of using processes controlled by GenAI. She says that some customer-facing GenAI provides inferior look and feel, which can become a friction point.


Technical Debt and the Hidden Cost of Network Management: Why it’s Time to Revisit Your Network Foundations

An often-overlooked example of this growing debt is a failure to actively manage and optimize IP addresses and Domain Name System (DNS) configurations—the very pillars of corporate network communication. Internet service providers (ISPs) of dedicated Internet access (DIA) to businesses would often assign blocks of addresses to customers. If those customers ever cancel or change providers, there's often a clean-up process to recover those resources. Businesses going through reorganizations or mergers and acquisitions may lose track (or may never have had good records) of IP address ranges. Security policies and routing policies may then become outdated, leaving an IP address hijacker a window inside the perimeter security measures. Companies using Network Address Translation (NAT) or Carrier-Grade NAT (CGNAT) to share one IP address among many devices may find that those functions, which edit data packets in flight, create unexpected failure modes. Sometimes, they hide problems, such as when malware or address snooping is happening within the boundary of the NAT. 
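A first step toward paying down this kind of debt can be as simple as auditing firewall or ACL entries against the address blocks the organization still actually holds. The sketch below uses Python's `ipaddress` module with example ranges (an RFC 5737 documentation block and an RFC 1918 private block); the entries and block list are invented for illustration.

```python
import ipaddress

# Blocks the organization still holds (illustrative example ranges).
owned_blocks = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("10.0.0.0/8"),
]

# Addresses referenced by current security or routing policies.
acl_entries = ["198.51.100.17", "203.0.113.5", "10.1.2.3"]

def is_owned(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in owned_blocks)

# Entries pointing at space no longer held are candidates for the kind of
# stale-policy window an IP address hijacker could exploit.
stale = [a for a in acl_entries if not is_owned(a)]
print(stale)  # ['203.0.113.5']
```

In practice this inventory would come from an IPAM system rather than a hard-coded list, but the containment check is the same.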


Bank Four Zeros Crucial for Banking Resilience, says Huawei

“Banking Everywhere means that, from a technology point of view, we have to make sure that any transactions should 100% not result in any problems or risks. It’s not easy. For example, on the Tube in London, how do you deliver 5G to enable transactions? Banking Everywhere is easy to say, but hard to do.” Cao highlights the high availability and easy migration of GaussDB. He says after GaussDB was deployed in one of the biggest banks in China, the recovery time objective (RTO) was slashed to 120 seconds – a world-leading level. He says Huawei is working on reducing this to just 30 seconds. ... “Some banks say we are in the AI era, some say the intelligent era, some say the open banking era, but a lot of banks still struggle working with a traditional model. So today it is like a multi-generational industry,” he says. “On the one hand, exciting things are coming, like Gen AI, and we have to be prepared. We have to be realistic to see what challenges we face today. If a bank is not resilient enough, it’s very hard to embrace Gen AI, or any intelligent opportunities.”


The Three Pillars of HIPAA Compliance

Policies and procedures must be developed on all aspects of HIPAA but not just to allow boxes to be ticked in a HIPAA compliance checklist. That may be sufficient to pass a very basic document review, but policies alone will not make an organization HIPAA compliant. All members of the workforce must be provided with the policies and must receive training relevant to their role. Every individual in a healthcare organization has a role to play in making their organization HIPAA compliant and must be trained to allow them to perform their duties in a HIPAA-compliant way. Employees should not have to guess how HIPAA applies. In addition to training, employees must be made aware of the sanctions policy and the repercussions of HIPAA violations and the sanctions policy must be enforced. HIPAA calls for training to be provided during the onboarding process, regardless of whether a new hire is a seasoned healthcare professional or is new to the industry. It is the responsibility of the compliance officer to ensure that appropriate training programs are developed and that all members of the workforce receive adequate training. 
And herein lies the problem, says Ma: When business stakeholders seek to understand EA’s value proposition, or even check the status of a project, they may get different answers depending on whom they ask. It’s the three-blind-men-describing-an-elephant problem: The man who feels the tail describes an animal very different from the one described by the man feeling the abdomen, or by the one feeling the ears and tusks. Though the variety in their descriptions may reflect the function’s comprehensiveness, to the uneducated executive, it sounds like misalignment. ... First, the full-stack architect could ensure the function’s other architects are indeed aligned, not only among themselves, but with stakeholders from both the business and engineering. That last bit shouldn’t be overlooked, Ma says. While much attention gets paid to the notion that architects should be able to work fluently with the business, they should, in fact, work just as fluently with Engineering, meaning that whoever steps into the role should wield deep technical expertise, an attribute vital to earning the respect of engineers, and one that more traditional enterprise architects lack.


What is cloud native and how can it generate business value?

Cloud-native workloads are conceived with the cloud in mind, focusing on scalability, agility, and application independence. Building applications directly in the cloud is a core component of agile development and DevOps practices, allowing developers to quickly update their stack to meet business needs. A definitive example of cloud-native workloads would be microservices, in which different elements of an application run in parallel as independent services, communicating via application programming interfaces (APIs). ... “Companies who deliver cloud services have demonstrated that they can … free us up from having to manage infrastructure hands-on,” Purcell tells ITPro. In the “old days,” Purcell explains, a prospective provider of cloud-native services would have had to rent property, procure hardware, and have a core IT team capable of establishing a framework to deliver the application. “By running it in the cloud … you just don't have to do any of that,” Purcell says. Because the main public cloud providers continue “building for builders,” as Purcell puts it, cloud native continues to be a dominant concept. Of course, applications in the private cloud can be cloud native as well.
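As a minimal sketch of the microservices pattern just described (the "inventory" service, its endpoint, and the stock data are all hypothetical), one element of an application can run as an independent service that another component consults over an HTTP API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# In-memory data owned exclusively by the inventory service.
STOCK = {"widget": 5}

class InventoryHandler(BaseHTTPRequestHandler):
    """Tiny JSON API: GET /<item> returns the stock level for that item."""

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "stock": STOCK.get(item, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

def start_inventory_service():
    """Start the service on an ephemeral port and return the server object."""
    server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def check_stock(port, item):
    """A separate component consuming the inventory service via its API."""
    with urlopen(f"http://127.0.0.1:{port}/{item}") as resp:
        return json.loads(resp.read())["stock"]

server = start_inventory_service()
print(check_stock(server.server_address[1], "widget"))  # → 5
server.shutdown()
```

In a real deployment each service would run in its own container or process behind a versioned, authenticated API; the in-process thread here only keeps the sketch self-contained.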


Why It’s Time to Rethink Generative AI in the Enterprise

One of the biggest changes is the increasing availability of foundation models beyond those supplied by companies that specialize in generative AI services. In addition to open-source models that have been released by companies like Meta and Google, we’re now seeing vendors like SAP developing their own foundation models. Crucially, these models will give enterprises greater opportunity to customize model operations by injecting their own parameters to control the context in which the model operates. In some cases, they can also train or retrain models on custom data. ... Integrating AI models with all the data that exists in a business is a complex task, not least because it is often unclear which dataset is most relevant for a specific use case. For instance, when querying sales data, should the model be prompted using data from the ERP system, the CRM, a manually prepared spreadsheet, or something else? To tackle this issue, businesses are likely to adopt what I refer to as “data dispatchers.” A data dispatcher is an integration tool that exposes data to GenAI services efficiently, making it easy for enterprises to leverage their data for custom model training.
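A toy sketch of the "data dispatcher" idea described above, assuming keyword-based routing between two hypothetical sources (an ERP and a CRM); a real dispatcher would use richer metadata, semantic matching, and governed connectors:

```python
# Each source declares routing keywords and a fetch function. The source
# names, keywords, and returned payloads are all illustrative assumptions.
SOURCES = {
    "erp": {"keywords": {"invoice", "order", "fulfillment"},
            "fetch": lambda: "ERP rows"},
    "crm": {"keywords": {"lead", "pipeline", "account"},
            "fetch": lambda: "CRM rows"},
}

def dispatch(query: str) -> str:
    """Route a GenAI query to the source whose keywords best match it."""
    words = set(query.lower().split())
    best, overlap = None, 0
    for name, src in SOURCES.items():
        hits = len(words & src["keywords"])
        if hits > overlap:
            best, overlap = name, hits
    if best is None:
        raise LookupError("no source matched; fall back or ask the user")
    return SOURCES[best]["fetch"]()
```

The dispatcher sits in front of the model: a prompt about invoices pulls from the ERP connector, one about pipeline from the CRM, and only the selected data is exposed to the GenAI service.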



Quote for the day:

"Great achievement is usually born of great sacrifice, and is never the result of selfishness." -- Napoleon Hill

Daily Tech Digest - April 04, 2024

Transforming CI/CD Pipelines for AI-Assisted Coding: Strategies and Best Practices

Most source code management tools, including Git, support tagging and annotation features, such as tags, notes, and commit trailers, that let developers apply labels to specific commits or sections of code. Teams that adopt AI-assisted coding should use these labels to identify code that was generated wholly or partially by AI. This is an important part of a CI/CD strategy because AI-generated code is, on the whole, less reliable than code written by a skilled human developer. For that reason, it may sometimes be necessary to run extra tests on AI-generated code — or even remove it from a codebase in the event that it triggers unexpected bugs. ... Along similar lines, some teams may find it valuable to deploy extra tests for AI-generated code during the testing phase of their CI/CD pipelines, both to ensure software quality and to catch any vulnerable code or dependencies that AI introduces into a codebase. Running those tests is likely to result in a more complex testing process because there will be two sets of tests to manage: those that apply only to AI-generated code, and those that apply to all code. Thus, the testing stage of CI/CD is likely to become more complicated for teams that use AI tools.
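One way the labeling idea could feed into a pipeline is sketched below, assuming the team records AI-assisted file paths in a manifest committed alongside the code; the manifest contents and suite names are illustrative, not a feature of Git or any particular CI tool:

```python
# Hypothetical manifest of files known to contain AI-generated code,
# maintained by the team (e.g. derived from commit trailers at merge time).
AI_MANIFEST = {"src/generated_parser.py", "src/ai_helpers.py"}

def select_test_suites(changed_files):
    """Return the test suites a CI pipeline should run for this change set."""
    suites = ["standard"]              # the normal suite always runs
    if any(path in AI_MANIFEST for path in changed_files):
        suites.append("ai-extended")   # extra scrutiny for AI-generated code
    return suites
```

A change touching only human-written files runs the standard suite; any change touching a manifest-listed file also triggers the deeper AI-focused checks, keeping the two sets of tests the article describes cleanly separated.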


Revolutionising Regulatory Compliance: AI & ML Powering The Future Of Financial Governance

The use of technology is quickly transforming how businesses handle compliance challenges. AI helps by automating tasks like monitoring and reporting. It quickly finds new regulatory requirements in a sea of information and ensures adherence by the organisation. Machine learning, a subset of AI, is good at spotting patterns and anomalies, which is essential for regulatory compliance. By looking at historical data, it can predict possible risks, so companies can deal with them early. Compliance officers can use AI tools to offload routine tasks, freeing them to handle harder problems and be more transparent with regulators. AI’s smart systems make compliance work smoother and more accurate. Looking forward, AI’s contribution to compliance seems promising. Predictive compliance management, powered by AI, will move from reacting to problems to spotting risks early, which could save companies from legal trouble. Real-time monitoring and personalised solutions for each company will become common, making compliance easier and better. Also, AI will work with other new technologies like blockchain and IoT to improve compliance.
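The anomaly-spotting idea can be illustrated with a toy statistical check; real compliance systems use far richer features and learned models, and the three-sigma threshold here is an arbitrary assumption:

```python
from statistics import mean, stdev

def flag_anomalies(history, candidates, z_threshold=3.0):
    """Flag candidate values that deviate strongly from historical behaviour."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in candidates if sigma and abs(x - mu) / sigma > z_threshold]

# Hypothetical transaction amounts: a stable history, then two new payments.
history = [100, 105, 98, 102, 99, 101, 103, 97]
print(flag_anomalies(history, [104, 250]))  # → [250]
```

The 104 payment sits within normal variation and passes silently, while the 250 payment is flagged for a compliance officer to review early, before it becomes a reporting problem.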


Codium announces Codiumate, a new AI agent that seeks to be Devin for enterprise software development

Codium hopes that Codiumate will aid developers in their workflow, speeding up all the manual typing they would otherwise have to do, doing the “heavy lifting” and mechanical coding work, while enabling the developer to act more as a hands-on product manager overseeing the process and course correcting it as necessary, almost as though it is a junior developer or new hire to the team. The technology powering the Codiumate agent on the backend is “best of breed” OpenAI models, according to Friedman. The company is also experimenting with Anthropic’s Claude and Google’s Gemini. It also offers its own large language model (LLM) designed with its AlphaCodium process that increases the performance of other LLMs in code completion tasks. While the former is available to all users, the latter Codium LLM is only for paying enterprise users. Friedman said it is superior to OpenAI’s models on coding and that a “Fortune 10” company that could not be named due to confidentiality reasons was already using it in production.


Healthcare’s cyber resilience under siege as attacks multiply

Every healthcare organization must ensure employees are well aware of and trained about potential threats. It’s critical to ensure they understand how to navigate and evaluate everything that comes in. One requirement could be to only open emails from known senders or to only open attachments if they are secure. Many organizations’ security teams will conduct resilience tests, distributing suspicious-looking emails to see which employees will click them. Modern spam filters are relatively adept at weeding out risky emails, but anyone with an inbox knows that many get through to end users. Most employers issue computers and devices, allowing for secured settings maintained by IT departments. It’s important to keep access and logins only on those devices and not on any personal devices, which typically offer much easier entry points into a system. Maintaining robust security settings on issued machines is especially important if the employee will be working from remote locations, including at home, where network security tends not to be as robust as within enterprises.


Instilling the Hacker Mindset Organizationwide

Visibility is a foundational principle that suggests you can't secure what you don't know about. A security team's lack of visibility is a gold rush for hackers, because they typically infiltrate an organization's network via hidden or sneaky entry points. If you don't have visibility, there will undoubtedly be a way in. Without visibility into all traffic within an organization's infrastructure, threat actors can continue to lurk in the network and grant themselves access to the organization's most sensitive data. With 93% of malware hiding behind encrypted traffic but only 30% of security professionals claiming to have visibility, it's no wonder that there were more ransomware attacks in the first half of 2023 than in all of 2022. Once a cybercriminal has made their way into the network, time is limited. Only with visibility can the cybercriminal be stopped from wreaking havoc and gaining access to company data. When cybersecurity professionals better understand the mysterious nature of hackers and how they work, they can better protect their own systems and valuable customer data. It's critical to stay vigilant not only when it comes to major security issues, but also with minor lapses in security best practices.


Separating the signals from the noise in tech evolution

With technology trends extensively covered across all forms of media, IT leaders often get questions or advice from well-meaning senior colleagues on what trends to adopt. However, not every trend warrants immediate attention or even playing catch-up if you’re late to the party. Wise leaders often opt to be “smart laggards” who focus on adopting and scaling the trends that really matter to their organizations. And they focus on demonstrating value quickly or stopping pilots or initiatives that are not delivering. ... In the current environment of uncertainty, marked by persistent macroeconomic challenges, global fragmentation, and growing cybersecurity challenges, tech leaders shared their perspectives on risks and resilience. More than one described reinventing the technology function and its value proposition in times of crisis, taking a “through-cycle mindset”: pushing forward in times of crisis rather than retrenching, and focusing on long-term value creation to help the company emerge stronger when conditions change. We also discussed how dashboards should balance short- to mid-term KPIs with long-term value delivery.


Navigating risks in AI governance – what have we learned so far?

In the face of a regulatory void, several entities have taken it upon themselves to establish their own standards aimed at tackling the core issues of model transparency, explainability, and fairness. Despite these efforts, the call for a more structured approach to govern AI development, mindful of the burgeoning regulatory landscape, remains loud and clear. ... However, an AI Risk and Security (AIRS) group survey reveals a notable gap between the need for governance and its actual implementation. Only 30% of enterprises have delineated roles or responsibilities for AI systems, and a scant 20% boast a centrally managed department dedicated to AI governance. This discrepancy underscores the burgeoning necessity for comprehensive governance tools to assure a future of trustworthy AI. ... The patchwork of regulatory approaches across the globe reflects the diverse challenges and opportunities presented by AI-driven decisions. The United States, for example, saw a significant development in July 2023 when the Biden administration announced that major tech firms would self-regulate their AI development, underscoring a collaborative approach to governance.


Unlocking Personal and Professional Growth: Insights From Incident Management

The skills and lessons gained from Incident Management are highly transferable to various aspects of life. For instance, adaptability is crucial not only in responding to technical issues but also in adapting to changes in personal circumstances or professional environments. Teamwork teaches collaboration, conflict resolution, and empathy, which are essential in building strong relationships both at work and in personal life. Problem-solving skills honed during incident response can be applied to tackle challenges in any domain, from planning a project to resolving conflicts. Resilience, the ability to bounce back from setbacks, is a valuable trait that helps individuals navigate through adversity with determination and a positive mindset. Continuous improvement is a mindset that encourages individuals to seek feedback, reflect on experiences, identify areas for growth, and strive for excellence. This attitude of continuous learning and development not only benefits individuals in their careers but also contributes to personal fulfillment and satisfaction.


How to build a developer-first company

Providing a great developer experience—by enabling our customers to easily add auth flows and user management to their apps—leads to a great end-user experience as the customer’s customers seamlessly and securely log in. This kind of virtuous cycle exists at many developer-focused companies. When building a successful developer-first business, it’s critical to tie together the similarities between the customer experience and the developer experience while clearly delineating the differences. ... When helping developers build their customer experience, we emphasize building onboarding and authentication flows with the best user experience in mind. That includes reducing friction, like the use of passwordless methods and progressive profiling, and creating an embedded in-app native experience to avoid needless redirections or pop-ups. Our developer experience includes an onboarding wizard that sets up their project and login flows in a few clicks. We offer a drag-and-drop visual workflow editor to easily create and customize their customer journey. We also provide robust documentation, code snippets, SDKs, tutorials, and a Slack community for troubleshooting.


How to fix the growing cybersecurity skills gap

Organizations looking to upskill their cybersecurity professionals should consider adjusting and reorganizing key workflows to give the entire security team, not just the CISO, ample time to research emerging threats and remain up to date on the ramifications those threats may have. By automating repetitive tasks for these team members or restructuring key processes and timelines, the entire team, from CISO to analyst, can have more time to dedicate to staying ahead of industry trends and cyber attacks, ultimately strengthening the organization’s ability to detect and respond to threats in the long run. Giving employees time and space to be curious and explore the latest threat intelligence, commentary, and insight, including topic-based tabletop exercises or red teaming, will yield significant dividends in understanding the organization's true security posture and preparedness. In today’s cybersecurity landscape, companies must strive to be learning-forward organizations. Tangible adoption of this principle must go beyond formal skills and training: every encounter your teams have with a threat or an attack is a learning opportunity.



Quote for the day:

"Though no one can go back and make a brand new start, anyone can start from now and make a brand new ending." -- Carl Bard

Daily Tech Digest - April 03, 2024

What is identity fabric immunity? Abstracting identity for better security

An identity fabric becomes an attractive option when it is merited, but adopting it before it is really called for adds unnecessary complexity. The key is knowing the tipping point. If it is doing the job with minimal friction, a simple identity provider framework is sufficient. When infrastructural complexity begins to cause serious difficulty within the organization, the security abstraction layer described by IFI offers a way forward, says Dmitry Sotnikov, chief product officer at Cayosoft. “Applications are now highly distributed, and users, partners, and customers log into systems from wherever they are, leaving security teams without an easily defined network and physical boundary to protect.” Signs that identity solutions are inadequate include difficulty in managing user access, account provisioning, and response to security incidents, both real and simulated. Managers may find that it is very hard to gain an overhead perspective on the security disposition of an enterprise, and that taking actions that affect security as a whole is cumbersome or extremely challenging.


Cyber attacks on critical infrastructure show advanced tactics and new capabilities

The interconnectedness of critical infrastructure assets, devices, and systems with third parties throughout the software supply chain has made identifying attack paths more complex than ever before. This interconnectedness creates numerous potential entry points for attackers to exploit. Additionally, cyber adversaries now possess a range of new tactics. ... Recent attacks on entities like Colonial Pipeline and water treatment plants demonstrate the potential for malicious actors to cause real-world impacts with just a few clicks. Ransomware criminals are increasingly targeting industries that rely heavily on operational systems, knowing that downtime can result in significant financial losses. Ransomware-as-a-Service (RaaS) has further fueled the proliferation of ransomware attacks, making these attacks more accessible to a wider range of threat actors. It’s important to note that criminal ransomware operators don’t typically use the zero-days that make headlines, or cyberwarfare-level capabilities; they exploit known vulnerabilities that have been unpatched for years.


Feds Ask Telcos: How Are You Combating Location Tracking?

The problems stem from the trust-based approach underpinning SS7, the signaling protocol used in 3G and earlier networks, and Diameter, which plays the same role in 4G networks. As detailed in a white paper from Swedish telecommunications giant Ericsson, both protocols take a trust-based approach, assuming that any network elements communicating with each other should be doing so. Even though Diameter is a newer protocol, it lacks security capabilities. "Diameter does not encrypt originating IP addresses during transport, which increases the risk of network spoofing, where an attacker poses as a legitimate roaming partner on a network to gain access to the network," the FCC said. Since SS7 and Diameter still serve as "the foundation for mobile telephone networks, especially for roaming capabilities to be able to interconnect networks," as networks expand their coverage and new networks and more users appear, "the opportunity for a bad actor to exploit SS7 and Diameter has increased," the FCC said. While the use of protocols such as SS7 and Diameter can be restricted to secure tunnels, thus making them more secure, the use of secure tunneling isn't mandatory, Ericsson said.


Avoiding the dangers of AI-generated code

As the adoption of AI tools to create code increases, organizations will have to put in place the proper checks and balances to ensure the code they write is clean—maintainable, reliable, high-quality, and secure. Leaders will need to make clean code a priority if they want to succeed. Clean code—code that is consistent, intentional, adaptable, and responsible—ensures top-quality software throughout its life cycle. With so many developers working on code concurrently, it’s imperative that software written by one developer can be easily understood and modified by another at any point in time. With clean code, developers can be more productive without spending as much time figuring out context or correcting code from another team member. When it comes to mass production of code assisted by AI, maintaining clean code is essential to minimizing risks and technical debt. Implementing a “clean as you code” approach with proper testing and analysis is crucial to ensuring code quality, whether the code is human-generated or AI-generated. Speaking of humans, I don’t believe developers will go away, but the manner in which they do their work every day will certainly change. 


Biggest problems and best practices for generative AI rollouts

The first step in the genAI journey is to determine the AI ambition for the organization and conduct an exploratory dialogue on what is possible, according to Gartner. The next step is to solicit potential use cases that can be piloted with genAI technologies. Unless genAI benefits translate into immediate headcount reduction or other cost reduction, organizations can expect financial benefits to accrue more slowly over time, depending on how the generated value is used. For example, Chandrasekaran said, being able to do more with less as demand increases, relying less on senior workers and on service providers, and improving customer and employee value (which leads to higher retention) are all financial benefits that grow over time. Most enterprises are also customizing pre-built LLMs, as opposed to building their own models from scratch. Through the use of prompt engineering and retrieval-augmented generation (RAG), firms can fine-tune an open-source model for their specific needs. RAG grounds a model's responses in retrieved data, producing more customized and accurate genAI output that can greatly reduce anomalies such as hallucinations.
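A stripped-down sketch of the RAG pattern, using bag-of-words cosine similarity in place of the learned embeddings and vector store a production system would use; the documents and prompt template are illustrative:

```python
from collections import Counter
from math import sqrt

# Hypothetical knowledge base an enterprise wants the model grounded in.
DOCS = [
    "refund policy: refunds are issued within 30 days of purchase",
    "shipping policy: orders ship within 2 business days",
]

def vectorize(text):
    """Bag-of-words term counts (stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs=DOCS):
    """Retrieval step: pick the document most similar to the query."""
    return max(docs, key=lambda d: cosine(vectorize(query), vectorize(d)))

def build_prompt(query):
    """Augmentation step: splice the retrieved context into the prompt."""
    return (f"Context: {retrieve(query)}\n\n"
            f"Question: {query}\nAnswer using only the context.")
```

Because the model is prompted with retrieved enterprise data rather than answering from parametric memory alone, its output stays anchored to the source documents, which is what reduces hallucinations.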


Digital Transformation: What Should be Next on Your Agenda?

Paying close attention to disruptive emerging technologies will help to future-proof strategies, Buchholz says. "Have you accounted for the impact of quantum computing?" he asks. "Within several years, it's likely to go from lab curiosity to useful tool." How about digital twins or the spatial web? "Not all of these [technologies] will come to pass but investing a few days up front can save years of pain down the road." Focus on digital transformation initiatives that have the highest potential to create value for the organization and its stakeholders, Bakalar advises. "Avoid wasting time, money, and effort on projects with low strategic value, feasibility or urgency." The best way to prioritize a digital transformation strategy is by defining precisely what transformation means to your organization, says Jed Cawthorne, modern work practice lead with IT consulting firm Creospark, via email. "Develop the strategy appropriately and, from there, prioritize the projects that will form your transformation plan." If one takes the approach of focusing on smaller, easier to digest transformation projects, you can reassess your prioritization at the completion of each project, Cawthorne says.


User privacy must come first with biometrics

Use cases have expanded to airports with biometric boarding, mobile banking and e-commerce to facilitate and authenticate transactions, and even with various branches of law enforcement using it for surveillance purposes. The benefits of AI-powered facial recognition technology are off the charts, with potential for dramatic increases in efficiency, security and ease of use across industries. But with upside comes an equally compelling downside, as organizations need to consider the privacy risks and concerns associated with collecting and using biometric data at scale. ... As biometrics continues to go mainstream, data discovery, data classification and the handling of sensitive information will become mainstay on IT task lists. But the key to not overwhelming IT is to incorporate data privacy principles and tactics at the start of development, so problems can be tackled proactively rather than reactively. This will be tech’s main challenge in the coming years. With AI fever everywhere, users will soon expect to access facial recognition services and products in a more personalized, efficient way, without compromising on the privacy front.


Be the change: Leveraging AI to fast track next-gen cyber defenders

With AI, enterprises can detect and prevent threats with speed and efficiency and secure a broader range of assets better than humans alone. They’re no longer limited by how many people are in their Security Operations Center or the expertise of their team. Instead, they are empowered to see things in real time and defend their environment against attacks in an infinitely scalable way. But AI can’t act alone and automation can only go so far. Humans will always be needed in the loop to decide what to do with the data and insights it provides. AI can be used to support these people and supercharge their capabilities. Consider the following: The job of a threat hunter is to translate threat concepts and hypotheses into queries. But this requires knowledge of complex languages and coding skills that are in short supply. AI-based platforms allow security teams to ask complex threat and adversary-hunting questions using natural language, and within seconds provide insights and recommended response actions that can be immediately executed. Entry-level threat hunters once limited in what they could solve can move to the next level and veterans can become more efficient, effective, and strategic.


Why A Bad LLM Is Worse Than No LLM At All

For an LLM to return a useful output, it needs to have interpreted the user’s query or prompt the way it was intended. There is a lot of nuance in language that can lead to misunderstandings and no solution exists yet that has guardrails to ensure consistent—and accurate—results that meet expectations. ... LLMs, including ChatGPT, have been known to simply make up data to fill in the gaps in their knowledge just so that they can answer the prompt. They are designed to produce answers that feel right, even if they aren’t. If you work with vendors supplying LLMs within their products or as standalone tools, it’s critical to ask them how their LLM is trained and what they’re doing to mitigate inaccurate results. ... The majority of LLMs on the market are available publicly online, which makes it incredibly challenging to safeguard any sensitive information or queries you input. It’s very likely that this data is visible to the vendor, who will almost certainly be storing and using it to train future versions of their product. And if that vendor is hacked or there’s a data leak, expect even bigger headaches for your organization. 


Building Resilient Cybersecurity Into Supply Chain Operations: A Technical Approach

One of the key challenges in supply chain cybersecurity is the interdependent nature of the supply chain. A single weak link in the chain can compromise the entire operation. For example, a cyberattack on a supplier could disrupt production, leading to delays, financial loss, and damage to the company's reputation. Moreover, the growing trend of digital transformation has led to an increase in the use of technologies such as Internet of Things (IoT) devices, cloud computing, and artificial intelligence in supply chain operations. While these technologies offer numerous benefits, they also increase the surface area for potential cyberattacks. ... The digital transformation of supply chains has led to the integration of various technologies such as IoT devices, cloud platforms, and AI-based systems. While these technologies have enhanced efficiency and productivity, they have also increased the complexity of the cybersecurity landscape. Ensuring the security of these diverse technologies, each with its own set of vulnerabilities, is a significant technical challenge.



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein

Daily Tech Digest - April 02, 2024

A double-edged sword: GenAI vs GenAI

Every technology indeed presents new avenues for vulnerabilities, and the key lies in maintaining strict discipline in identifying and addressing these vulnerabilities. This calls for the strict application of IT ethos in organisational setups to ensure no misuse of technologies, especially intelligent ones. “It is crucial to continuously test your APIs and applications, relentlessly seeking out any potential vulnerabilities and ensuring they are addressed promptly. This proactive approach is vital in safeguarding your platform against potential threats,” says Sunil Sapra, Co-founder & Chief Growth Officer, Eventus Security. The Government of India has proactively addressed the grave importance of cybersecurity and recently rolled out the much-awaited Digital Personal Data Protection Act 2023. Though the Act takes data protection and data privacy into consideration, laying emphasis on the ‘consent of the owner’, it does not draw the spotlight on GenAI, which can make or break the existing cyber fortifications. Hence, there is a dire need for strong regulations and control measures guarding the application of GenAI models.


There's more to cloud architecture than GPUs

GPUs require a host chip to orchestrate operations. Although this arrangement abstracts away much of the complexity of modern GPU architectures, it’s also less efficient than it could be. GPUs operate in conjunction with CPUs (the host chips), which offload specific tasks to GPUs; these host chips also manage the overall operation of software programs. Adding to this question of efficiency are the necessity for inter-process communication; the challenges of disassembling models, processing them in parts, and then reassembling the outputs for comprehensive analysis or inference; and the complexities inherent in using GPUs for deep learning and AI. This segmentation and reintegration process is part of distributing computing tasks to optimize performance, but it comes with its own efficiency questions. Software libraries and frameworks designed to abstract and manage these operations are required. Technologies like Nvidia’s CUDA (Compute Unified Device Architecture) provide the programming model and toolkit needed to develop software that can harness GPU acceleration capabilities.


How to Evaluate the Best Data Observability Tools

Some key areas to evaluate for enterprise readiness include: Security: Do they have SOC II certification? Robust role-based access controls? Architecture: Do they have multiple deployment options for the level of control over the connection? How does it impact data warehouse/lakehouse performance? Usability: This can be subjective and superficial during a committee POC, so it’s important to balance this with the perspective of actual users. Otherwise you might over-prioritize how pretty an alert appears versus aspects that will save you time, such as the ability to bulk-update incidents or to deploy monitors-as-code. Scalability: This is important for small organizations and essential for larger ones. We all know the nature of data and data-driven organizations lends itself to fast, and at times unexpected, growth. What are the largest deployments? Has this organization proven its ability to grow alongside its customer base? Other key features here include things like the ability to support domains, reporting, change logging, and more. These typically aren’t flashy features, so many vendors don’t prioritize them.
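The monitors-as-code point can be made concrete with a small sketch: monitors declared as data can be code-reviewed and bulk-updated like any other artifact. The table names and staleness thresholds below are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

# Monitors declared as data ("monitors-as-code"): easy to diff, review,
# and bulk-edit, unlike monitors clicked together in a UI.
MONITORS = [
    {"table": "orders", "max_staleness": timedelta(hours=1)},
    {"table": "customers", "max_staleness": timedelta(hours=24)},
]

def evaluate(monitors, last_loaded, now=None):
    """Return the tables whose most recent load is older than its threshold."""
    now = now or datetime.now(timezone.utc)
    return [
        m["table"]
        for m in monitors
        if now - last_loaded[m["table"]] > m["max_staleness"]
    ]
```

An observability tool that supports this style lets teams tighten every threshold in one pull request, which is exactly the kind of unflashy capability the evaluation above says to look for.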


CISA releases draft rule for cyber incident reporting

According to the proposed rules, CISA plans to use the data it receives to carry out trend and threat analysis, incident response and mitigation, and to inform future strategies to improve resilience. While the rule is not expected to be finalized until 18 months from now or potentially later next year, comments are due 60 days after the proposal is officially published on April 4. One can be sure that the 16 different critical infrastructure sectors and their armies of lawyers will have much to say. The 447-page NOPR details a dizzying array of nuances for specific sectors and cyber incidents. ... The list of exceptions to the cyber incidents that critical infrastructure operators will need to report is around twice as long as the conditions that require reporting an incident, and the final shape of the rule may change as CISA considers comments from industry. The companies affected by the proposed rules include all critical infrastructure entities that exceed the federal government’s threshold for what is a small business. The rules provide a series of different criteria for whether other critical infrastructure sectors will be required to report incidents.


Digital transformation’s fundamental change management mistake

The bigger challenge is often downstream and occurs when digital trailblazers, the people assigned to lead digital transformation initiatives, must work with end users on process changes and technology adoption. When devops teams release changes to applications, dashboards, and other technology capabilities, end users experience a productivity dip before they can effectively leverage the new capabilities. This dip delays when the business can start realizing the value delivered. While there are a number of change management frameworks and certifications, many treat change as a discipline separate from the product management, agile, and devops methodologies CIOs use to plan and deliver digital transformation initiatives. ... Reducing productivity dips and easing end-user adoption, then, are practices that must fit the digital transformation operating model. Let's consider three areas where CIOs and digital trailblazers can inject change management into their digital transformation initiatives in a way that brings greater effectiveness than treating change management as a separate add-on.


6 keys to navigating security and app development team tensions

Unfortunately, many organizations don't take the proper steps, leading to the development team viewing security teams as a "roadblock" — a hurdle to overcome. Likewise, the security team's animosity toward development teams grows as they view developers as not "taking security seriously enough." ... When an AppSec team is built solely from security people who have never worked in development, friction between the two groups is likely because they will always speak two different languages, and neither group understands the problems and challenges the other faces. When an AppSec team includes former developers, you will see a much different relationship between the teams. ... Sometimes there are unreasonable requests, because the security team asks for fixes to things that aren't actual issues. This happens when they run an application vulnerability scanner, the scanner reports a vulnerability that doesn't exist or doesn't expose an actual risk, and the security team blindly passes it on to developers to remedy.


Enhancing Business Security and Compliance with Service Mesh

When implementing a service mesh, there are several important factors to consider for a secure and compliant deployment. First, carefully evaluate the security features and capabilities of the chosen service mesh framework. Look for strong authentication methods like mutual TLS and support for role-based access control (RBAC) to ensure secure communication between services. Second, establish clear policies and configurations for traffic management, such as circuit breaking and request timeouts, to mitigate the risk of cascading failures and improve overall system resilience. Third, consider the observability aspects of the service mesh. Ensure that metrics, logging, and distributed tracing are properly configured to gain insight into service mesh behavior and detect potential security incidents. For example, leverage tools like Prometheus for metrics collection and Grafana for visualization to monitor key security metrics such as error rates and latency. Finally, maintain regular updates and patches for the service mesh framework to address security vulnerabilities promptly, and stay informed about the latest security advisories and best practices published by the service mesh community.
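The error-rate monitoring described above can be sketched in a few lines. This example derives an error rate from the kind of request counters a mesh sidecar exports and flags services that breach a threshold; the counter layout and the 1% threshold are illustrative assumptions, not part of any particular mesh's API:

```python
# Illustrative sketch: compute error rates from request counters of the
# kind a service mesh sidecar exports (total vs. 5xx responses) and flag
# services whose rate breaches a threshold. The counter layout and the
# 1% default threshold are assumptions for the example.

def error_rate(total_requests: int, error_responses: int) -> float:
    """Fraction of requests that failed; 0.0 when there is no traffic."""
    if total_requests == 0:
        return 0.0
    return error_responses / total_requests

def breaching_services(counters: dict, threshold: float = 0.01) -> list:
    """Return the names of services whose error rate exceeds the threshold."""
    return sorted(
        name
        for name, (total, errors) in counters.items()
        if error_rate(total, errors) > threshold
    )

# Sample counters: service -> (total requests, 5xx responses)
counters = {
    "checkout": (10_000, 250),   # 2.5% error rate -- breaches
    "catalog":  (50_000, 100),   # 0.2% -- healthy
    "auth":     (0, 0),          # no traffic -- treated as healthy
}
```

In practice the same logic would run as a Prometheus alerting rule over the mesh's metrics rather than in application code; the sketch just makes the threshold check concrete.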


Who should be the head of generative AI — and what they should do

Some generative AI leaders might have a creative background; others could come from tech. Gratton said background matters less than a willingness to experiment. “You want somebody who’s got an experimental mindset, who sees this as a learning opportunity and sees it as an organizational structuring issue,” she said. “The innovation part is what’s really crucial.” ... The head of AI could encourage use of the technology to help with managing employees, Gratton said. This encompasses three key areas:

Talent development – Companies can use chatbots and other tools to recruit people and help them manage their careers.
Productivity – AI can be used to create assessments, give feedback, manage collaboration, and provide skills training.
Change management – This includes both internal and external knowledge management. “We have so much knowledge in our organizations … but we don’t know how to find it,” Gratton said. “And it seems to me that this is an area that we’re really focusing on in terms of generative AI.”

... Leaders should remember that buy-in across all career stages and skill levels is essential. Generative AI isn’t just the domain of youth.


Knowledge-Centered Design for Generative AI in Enterprise Solutions

The need for a new design pattern, Knowledge-Centered Design (KCD), arises from the evolution and complexity of AI and machine learning technologies. As these technologies advance, they generate an increasing volume of knowledge and insights. Traditional Human-Centered Design (HCD) focuses on understanding users, their tasks, and their environments; however, it may not be fully equipped to handle the intricate dynamics of both human-generated and AI-generated knowledge. The proposed KCD extends HCD by emphasizing the life cycle of knowledge – identifying, acquiring, categorizing, and extracting insights from it – and by incorporating feedback loops for continuous improvement. It ensures that both human-generated and AI-generated knowledge are effectively integrated into the design process to enhance user experience and productivity. ... The knowledge life cycle process, the feedback loop process, and the other integral components of the KCD pattern serve as baselines that each enterprise can adapt and adjust according to its specific business needs and institutional culture.
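The life cycle described above, identify → acquire → categorize → extract insights, plus a feedback loop, can be sketched as a simple pipeline. The stage names and feedback mechanism here are illustrative assumptions drawn from the paragraph, not a published KCD API:

```python
# Illustrative sketch of the knowledge life cycle described above:
# identify -> acquire -> categorize -> extract insights, with a feedback
# loop carrying corrections into the next pass. Stage names are
# assumptions taken from the text, not a standard API.

STAGES = ["identify", "acquire", "categorize", "extract_insights"]

def run_lifecycle(item: dict, feedback: list) -> dict:
    """Push one knowledge item (human- or AI-generated) through every stage."""
    for stage in STAGES:
        item.setdefault("history", []).append(stage)
    # Feedback loop: corrections from users or downstream AI accumulate
    # so the next cycle can improve categorization and extraction.
    if item.get("correction"):
        feedback.append(item["correction"])
    return item

feedback: list = []
item = run_lifecycle({"source": "ai", "correction": "relabel topic"}, feedback)
```

The sketch's point is structural: knowledge flows through explicit, inspectable stages, and the feedback list is the hook where continuous improvement attaches.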


Creating a Data Monetization Strategy

Monetizing customer data involves implementing effective strategies and adhering to best practices to maximize its value. One key approach is to ensure data privacy and security, as customers are increasingly concerned about how their personal information is used. Companies must establish robust data protection measures, comply with regulations such as GDPR or CCPA, and obtain explicit consent for data collection and use. Another strategy is to leverage advanced analytics techniques to derive valuable insights from customer data. By employing ML algorithms, predictive modeling, and artificial intelligence, businesses can uncover patterns, preferences, and trends. ... Blockchain technology is changing how data is monetized by enhancing security and trust in the digital ecosystem. A blockchain, a decentralized and immutable ledger, provides a robust infrastructure for securely storing and transferring data, making it well suited to data monetization. Additionally, every transaction recorded on the blockchain is linked to previous transactions through cryptographic hash functions, safeguarding the integrity of the data.
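The hash linking mentioned above is straightforward to illustrate. This minimal sketch chains records with SHA-256 so that altering any earlier record invalidates every later one; it is a toy model of the integrity property, not a real ledger:

```python
import hashlib
import json

# Minimal sketch of hash chaining: each record stores the hash of its
# predecessor, so tampering with any earlier record breaks verification
# of the chain. A toy model of the integrity property, not a blockchain.

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 digest of a record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain: list, data: str) -> list:
    """Append a record that commits to the current tail of the chain."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})
    return chain

def verify(chain: list) -> bool:
    """True iff every record's prev_hash matches its predecessor's digest."""
    return all(
        chain[i]["prev_hash"] == record_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append(chain, "sale: dataset-A to partner-1")
append(chain, "sale: dataset-B to partner-2")
```

Changing any field of the first record changes its digest, so the second record's stored `prev_hash` no longer matches and `verify` fails — which is exactly the tamper-evidence the excerpt attributes to hash linking.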



Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis