Daily Tech Digest - June 17, 2023

Borderless Data vs. Data Sovereignty: Can They Co-Exist?

Businesses have long understood that data sharing has limits (or borders). Legal separations keep data from various subsidiaries distinct or limit sharing between partners to specific data types. Multi-tenant software applications often require logical partitions to keep customer data private. What is rapidly changing is the arrival of new data sovereignty laws, often cloaked as "data privacy" regulations, that enforce geographic boundaries on where data is processed and stored. Businesses must comply with the laws of each country where they operate, and data sovereignty presents a clear compliance challenge as companies hurry to rethink how and where they can safely acquire, share, and protect personal data. Countries enacting regulations that keep personal data inside their borders may deem their citizens' data of strategic national importance. More commonly, such laws are an enforcement mechanism that acknowledges personal data as an asset owned by individuals, which businesses must use and share according to that country's laws. Recent data sovereignty requirements cannot be easily bypassed or offloaded onto consumer consent.


All change: The new era of perpetual organizational upheaval

With upsets coming from all directions—whether they be supply chain disruptions, surging inflation, or spikes in interest rates and energy prices—companies need to focus on being prepared and ready to act at all times. The key is not just to bounce out of crises, but to bounce forward—landing on their feet relatively unscathed and racing ahead with new energy. ... But it’s raising huge questions: How can companies provide structure and support to all employees regardless of where they are? How do they address the potential risks to company culture and the sense of belonging, as well as to collaboration and innovation? The pandemic exacerbated other trends, including the continuing skills mismatch in the labor market, which the onward march of technology is intensifying. It threw a harsh light on the challenge of workplace motivation—sometimes referred to as the “great attrition,” with workers leaving their jobs, or quiet quitting, essentially downscaling their efforts on the job.


A guide to becoming a Chief Information Security Officer: Steps and strategies

The technical skills are a must-have. Know all about network security, cloud security, identity and access management, and adopting and adapting infrastructure, along with the tools and technologies that preserve organizational data privacy, integrity and computing availability. Security engineers who are interested in becoming CISOs often focus on problem hunting. CISOs need to not only be able to find problems, but to identify problems and vulnerabilities that aren't apparent to those around them. Learning to ask the right kinds of questions and thinking about issues in unconventional ways take time and practice. CISOs need to continuously update their mental models when it comes to thinking about cyber security. The mental model required for on-premises cyber security implementation is different from that required for the cloud. As an increasing number of automation and AI-based tools emerge, mental models will again need to be retrofitted. Many aspiring CISOs sell their technical credentials to prospective employers. This is important.


TinyML computer vision is turning into reality with microNPUs (µNPUs)

Digital image processing—as it used to be called—is used for applications ranging from semiconductor manufacturing and inspection, to advanced driver assistance systems (ADAS) features such as lane-departure warning and blind-spot detection, to image beautification and manipulation on mobile devices. And looking ahead, CV technology at the edge is enabling the next level of human-machine interfaces (HMIs). HMIs have evolved significantly in the last decade. On top of traditional interfaces like the keyboard and mouse, we now have touch displays, fingerprint readers, facial recognition systems, and voice command capabilities. While clearly improving the user experience, these methods have one other attribute in common—they all react to user actions. The next level of HMI will be devices that understand users and their environment via contextual awareness. Context-aware devices sense not only their users, but also the environment in which they are operating, all in order to make better decisions toward more useful automated interactions.
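
To make the pipeline concrete, here is a minimal sketch of edge CV inference in Python, assuming a hypothetical quantized person-detection model file (person_detect.tflite) and the tflite_runtime package; on a real µNPU, the vendor's runtime would offload the heavy layers to the accelerator.

    import numpy as np
    from tflite_runtime.interpreter import Interpreter  # lightweight TFLite runtime

    # Load a quantized model (hypothetical file name) and allocate its tensors.
    interpreter = Interpreter(model_path="person_detect.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Stand-in frame with the model's expected shape; a real device would
    # read this from its camera sensor instead.
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])

    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    print("person score:", interpreter.get_tensor(out["index"]))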


Intel Announces Release of ‘Tunnel Falls,’ 12-Qubit Silicon Chip

“Tunnel Falls is Intel’s most advanced silicon spin qubit chip to date and draws upon the company’s decades of transistor design and manufacturing expertise. The release of the new chip is the next step in Intel’s long-term strategy to build a full-stack commercial quantum computing system. While there are still fundamental questions and challenges that must be solved along the path to a fault-tolerant quantum computer, the academic community can now explore this technology and accelerate research development.” — Jim Clarke, director of Quantum Hardware, Intel. Why it matters: currently, academic institutions don’t have high-volume manufacturing fabrication equipment like Intel does. With Tunnel Falls, researchers can immediately begin working on experiments and research instead of trying to fabricate their own devices. As a result, a wider range of experiments becomes possible, including learning more about the fundamentals of qubits and quantum dots and developing new techniques for working with devices with multiple qubits.


What bank leaders should know about AI in financial services

While this technology has many exciting potential use cases, so much is still unknown. Many of Finastra’s customers, whose job it is to be risk-conscious, have questions about the risks AI presents. And indeed, many in the financial services industry are already moving to restrict use of ChatGPT among employees. Based on our experience as a provider to banks, Finastra is focused on a number of key risks bank leaders should know about. Data integrity is table stakes in financial services. Customers trust their banks to keep their personal data safe. However, at this stage, it’s not clear what ChatGPT does with the data it receives. This raises an even more concerning question: Could ChatGPT generate a response that shares sensitive customer data? With the old-style chatbots, questions and answers are predefined, governing what’s being returned. But what is asked and returned with new LLMs may prove difficult to control. This is a top consideration bank leaders must weigh and monitor closely. Ensuring fairness and lack of bias is another critical consideration.


Are public or proprietary generative AI solutions right for your business?

Internal large language models are interesting. Training on the whole internet has benefits and risks — not everyone can afford to do that or even wants to do it. I’ve been struck by how far you can get on a big pre-trained model with fine tuning or prompt engineering. For smaller players, there will be a lot of uses of the stuff [AI] that’s out there and reusable. I think larger players who can afford to make their own [AI] will be tempted to. If you look at, for example, AWS and Google Cloud Platform, some of this stuff feels like core infrastructure — I don’t mean what they do with AI, just what they do with hosting and server farms. It’s easy to think ‘we’re a huge company, we should make our own server farm.’ Well, our core business is agriculture or manufacturing. Maybe we should let the A-teams at Amazon and Google make it, and we pay them a few cents per terabyte of storage or compute. My guess is only the biggest tech companies over time will actually find it beneficial to maintain their own versions of these [AI]; most people will end up using a third-party service. 


Governance in the Age of Technological Innovation

To keep abreast of technological change and innovation, the board needs to ensure that its innovation and risk agendas are up-to-date, and that innovation is incorporated into the organisation’s strategy review. This may involve reviewing key performance indicators, performance measures and incentives. Within the board, the appropriate composition, culture and interactions can promote innovation. Not all board directors will have the relevant technical expertise, but more diverse boards can build collective literacy and enhance human capital in the boardroom, said De Meyer. Where necessary, committees such as scientific or innovation committees can be created to drive greater attention to these topics. In these cases, naming matters, said Janet Ang, non-executive Chair of the Institute of Systems Science, in the panel discussion. For instance, referring to a committee as “Technology and Risk” instead of narrowly naming it as “IT” gives it more weight and scope. Fundamentally, boards should not only strive for conformance but also for performance, urged Su-Yen Wong, Chair of the Singapore Institute of Directors.


Can You Renegotiate Your Cloud Bill by Refusing to Pay It?

Cloud hyperscalers continue to face questions about the cost and reliability of their services, especially in light of the brief AWS outage on June 13 that affected Southwest Airlines, McDonald’s, and The Boston Globe, among others. Further, some organizations face regulatory requirements that preclude the use of the cloud for certain datasets and transactions, Katz says. “There’s really no one-size-fits-all answer because every manufacturer, every organization, every company has different requirements.” There can be times when a cloud-first approach does not make sense for organizations. Katz says his company worked with a client whose dataset is very transactional with lots of changes and database read-writes. “We ran an assessment for them and going off to the public cloud was going to be eight times more expensive a month than keeping it on prem.” ... Much of the market is pushing toward a cloud-first world, but the economics could become challenging in the future. “At some point in time, the cost of doing business in the cloud is going to be exponentially higher, usually, than if you were to buy a depreciating asset and then kick it to the curb,” Katz says.


Red teaming can be the ground truth for CISOs and execs

What red teams can give CISOs is the cold, hard truth of how their network stacks up against threats that could be ruinous to the business. Red teams leave no stone unturned and pull on every thread until it unravels. This shines a light on the vulnerabilities that could harm the finances or reputation of the business. With a red team, objective-based continuous penetration testing (led by experts who know attackers’ best tricks) can relentlessly scrutinize the attack surface to explore every avenue that could lead to a breakthrough. This proactive, “offensive security” approach will give a business the most comprehensive picture of its attack surface that money can buy, mapping out every possibility available to an attacker and how it can be remediated. It is also not limited to testing the technology stack; for businesses concerned that their employees are susceptible to social engineering attacks, red teams can emulate social engineering scenarios as part of their testing. A stringent social engineering assessment program should not be overlooked in favor of only scrutinizing weaknesses in IT infrastructure.



Quote for the day:

"Leadership is just another word for training." -- Lance Secretan

Daily Tech Digest - June 15, 2023

The five new foundational qualities of effective leadership

Today’s leaders have to be able to establish a compelling destination and then navigate through the fog with a compass. “You have to be ready to make a decision today, realizing that you may get new data tomorrow that means you have to reverse the decision you just made,” a veteran CEO of a Fortune 25 company told us. “You have to have the courage to follow that new information. The job’s always been ambiguous. But the environment has never been this fluid.” Boards and CEOs expect succession candidates to be adept at providing direction and key performance indicators that will signal whether course adjustments are necessary. “We’re living in an age with many more discontinuities than we had a generation or two ago,” said Mark Thompson, former CEO of the New York Times Company and now board chairman of Ancestry. “It’s not about trying to find the perfect strategies. It’s more about helping organizations to be more open, flexible, and adaptable to change.” This shift demands a more dynamic, individual leadership approach, as well as a reimagining of basic organizational processes. 


5 best practices to ensure the security of third-party APIs

Maintaining an API inventory that automatically updates as code changes is an instrumental first step for an API security program, says Jacob Garrison, a security researcher at Bionic. The inventory should distinguish between first-party and third-party APIs, and it encourages continuous monitoring for shadow IT — APIs brought on board without notifying the security team. “To ensure your inventory is robust and actionable, you should track which APIs transmit business-critical information, such as personally identifiable information and payment card data,” he says. An API inventory is complementary to third-party risk management, according to Garrison. When developers utilize third-party APIs, it’s worthwhile to consider risk assessments of the vendors themselves. ... Frank Catucci, chief technology officer and head of security research for Invicti Security, agrees that including an inventory of third-party APIs is critical. "You need to have third-party APIs be part of your overall API inventory and you have to look at them as assets that you own, that you are responsible for," he says.
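
As a minimal sketch of what such an inventory record could look like (the field names here are hypothetical, not from any particular tool), the key is tagging ownership and data sensitivity so the list stays actionable:

    from dataclasses import dataclass, field

    @dataclass
    class ApiRecord:
        name: str
        owner: str                                         # "first-party" or "third-party"
        endpoint: str
        data_classes: list = field(default_factory=list)   # e.g. ["PII", "PCI"]
        approved: bool = False                             # False may indicate shadow IT

    inventory = [
        ApiRecord("payments", "third-party", "https://api.example.com/v1/charge",
                  ["PCI"], approved=True),
        ApiRecord("search", "first-party", "/api/search", approved=True),
    ]

    # Surface the riskiest entries: unapproved APIs or ones carrying sensitive data.
    for r in inventory:
        if not r.approved or r.data_classes:
            print(r.name, r.owner, r.data_classes)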


Generative AI’s change management challenge

“The hardest part of AI acceptance is creating a space where employees can still add value and not feel they are competing with AI to create value,” Bellefonds added. “A lot of the work we do when it comes to change management and coaching is to help employees work with AI and at the same time, change the way they add value, so that a part of their job is taken by AI but their part refocuses on higher value-adding tasks.” Exactly how those processes are rewired and the working methods changed will vary from one enterprise to another, he said. There are other ways in which employees’ concerns about AI are unevenly distributed, too. Leaders are more likely to be optimistic, and frontline workers concerned, BCG found. And while 68% of leaders believe their companies have implemented adequate measures to ensure responsible use of AI, only 29% of their frontline employees feel that way. Despite BCG’s findings of optimism in the workforce, there’s a darker side. Over one-third of respondents think their job is likely to be eliminated by AI, and almost four-fifths want governments to step in and deliver AI-specific regulations to ensure it’s used responsibly.


As Machines Take Over — What Will It Mean to Be Human?

Biocomputing is a field of study that uses biologically based molecules, such as DNA or proteins, to perform computational tasks. Imitating the genius of nature can completely shift the paradigm of understanding when it comes to the computation and storage of data. The field has shown promise in cryptography and drug discovery. However, biocomputers are still limited compared to conventional computers, since they aren't good at cooling themselves or at doing more than two things simultaneously. Advancements in AI, however, have been booming. Since 2012, interest in AI, especially in machine learning, has been renewed, leading to a dramatic increase in funding and investment. Machine learning models ingest large amounts of data and infer patterns. More recently, generative AI has become extremely popular with the release of large AI models such as MidJourney, ChatGPT and Stable Diffusion. Generative AI is a class of AI algorithms that generate new data or content that closely resembles existing, human-made data.


What is SDN and where is it going?

There are three main components to a software-defined network: controller, applications, and devices. The controller has taken over the role of the control plane on each individual network device. It populates the tables that the data planes on those devices use to do their work. There are various communication protocols that can be used for this purpose, including OpenFlow, though some vendors use proprietary protocols. Communication between the controller and devices is referred to as southbound APIs. The software controller is, in turn, managed by applications, which can fulfill any number of network administration roles, including load balancers, software-defined security services, orchestration applications, or analytics applications that keep tabs on what's going on in the network. These applications communicate with the controller (northbound APIs) through well-documented REST APIs that allow applications from different vendors to communicate with ease. 
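
To sketch what a northbound call looks like in practice, here is a hypothetical REST exchange in Python; the controller address, endpoint path, and JSON shape are illustrative, not any specific vendor's API:

    import requests

    CONTROLLER = "https://sdn-controller.example.com:8443"  # hypothetical controller

    # An application (say, a load balancer) asks the controller to steer traffic;
    # the controller then programs the devices' data planes over southbound
    # protocols such as OpenFlow.
    flow = {
        "match": {"dst_ip": "10.0.0.42/32"},
        "action": {"forward_to": "backend-pool-2"},
        "priority": 100,
    }
    resp = requests.post(f"{CONTROLLER}/api/flows", json=flow, timeout=5)
    resp.raise_for_status()
    print("flow installed:", resp.json())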


Using Trauma-Informed Approaches in Agile Environments

Software is, by definition, very abstract. For this reason, we naturally tend to be in our heads and thoughts most of the time while at work. However, a more trauma-informed approach requires us to pay more attention to our physical state and not just to our brain and cognition. Our body and its sensations give us many signals, vital not just to our well-being but also to our productivity and our ability to cognitively understand each other and adapt to changes. Paradoxically, in the end, paying more attention to our physical and emotional state gives us more cognitive resources to do our work. Noticing our bodily sensations in the moment, like breath or muscle tension in a particular area, can be a first step to getting out of a traumatic pattern. And a generally higher level of body awareness can help us fall into such patterns less often in the first place. Simply put, our body awareness anchors us in the here and now, making it easier for us to recognize past patterns as inadequate for the current situation.


How Pyramid Thinking Can Revolutionize Your Data Strategy

Before devising a corporate data strategy, the main things you need to know are the strategy and objectives of your organization as a whole. Data can be a truly transformative tool, but even the sharpest knife needs to be used accurately to get the best results -- which is why you need to know the end goal before you can understand how data can help you achieve it. This end goal forms the very peak of the pyramid and it is by looking downwards from it that you can understand the role that data can play. For organizations struggling to pinpoint that goal (as oftentimes happens when the business strategy isn’t well-defined and documented), it is worth considering key business problems and the consequent opportunities for improvement. ... Identifying business goals gives you the basis upon which to build your data strategy, and with that you can begin to be more specific about the change you are looking to make. An actionable and measurable formula helps you shape those changes with clarity, such as “we want to do x by measuring/tracking/analyzing y in order to do z” -- for instance, “we want to reduce customer churn by analyzing cancellation feedback in order to prioritize retention offers.”


Network spending priorities for second-half 2023

Security is the area where most users expect to spend more, but at the same time an area where they believe their spending is most likely to be sub-optimal. Three-quarters of buyers think they already spend too much on security because they’ve layered things on without considering the whole picture. You hear terms like “holistic approach” or “rethinking” a lot in their comments, but at the same time, less than an eighth of the users expect to redo their security strategies in any way.  ... The reason for the seemingly mindless AI enthusiasm is a simple reversal of an old saying: “Where there’s hope, there’s life.” AI could (theoretically) reduce operator errors. It could (hopefully) improve network capacity planning. It could (presumably) help secure applications and data and spot malefactors. All these things are recurring problems that seem to defy solution, and AI offers a hope that a solution might be near at hand. What’s not to love, provisionally of course.


Biodiversity Means Business

Technology can play a key role in navigating biodiversity issues. Predictive analytics, machine learning, digital twins, blockchain and the Internet of Things can deliver insight, visibility and measurability into sourcing, supply chains and environmental impacts. However, Katic emphasizes that these tools must be used to drive real change. “They must support a paradigm shift to new, sustainable models of development, rather than entrenching business as usual. They must deliver enhanced transparency and accountability,” she says. Ultimately, companies must embed biodiversity deep into their business strategies and daily operations, Katic says. This includes the use of science-based methods that revolve around the UN’s Sustainable Development Goals and its Global Biodiversity Framework. It can also incorporate tools such as S&P’s scoring system, part of its UN-linked GlobalSustainable1 initiative, which provides dependency scores, ecosystem footprint insights, and other biodiversity data that can guide decision-making. In addition, the SBTN framework can serve as a valuable resource. More than 200 organizations helped shape the initial set of methods, tools, and guidance.


5 roadblocks to Rust adoption in embedded systems

Rust is not a trivial language to learn. While it does share common ideas and concepts with many of the languages that came before it, including C, the learning curve is steeper. When a company looks to adopt a new language, they hire engineers who already know the technology or are forced to train their team. Teams interested in using Rust for embedded will find themselves in a small, niche community. Within this community, not many qualified embedded software engineers know Rust. That means paying a premium for the few developers who know Rust or investing in training the existing internal team. Training a team to use Rust isn’t a bad idea. Every company and developer should be investing in themselves constantly. Our field changes so rapidly that you’ll quickly get left behind if you don’t. However, switching from one programming language to another must provide a return on investment for the company, especially when switching to an immature language like Rust.



Quote for the day:

"Don't focus so much on who is following you, that you forget to lead." -- E'yen A. Gardner

Daily Tech Digest - June 14, 2023

Malicious hackers are weaponizing generative AI

The headline here is not that this new threat exists; it was only a matter of time before threats powered by generative AI showed up. There must be better ways to fight these types of threats, which are likely to become more common as bad actors learn to leverage generative AI as an effective weapon. If we hope to stay ahead, we will need to use generative AI as a defensive mechanism. This means a shift from being reactive (the typical enterprise approach today) to being proactive, using tactics such as observability and AI-powered security systems. The challenge is that cloud security and devsecops pros must step up their game in order to keep out of the 24-hour news cycles. This means increasing investments in security at a time when many IT budgets are being downsized. If there is no active response to managing these emerging risks, you may have to price in the cost and impact of a significant breach, because you’re likely to experience one. Of course, it’s the job of security pros to scare you into spending more on security or else the worst will likely happen.


Avoiding the Pain of a ‘Resume-Driven Architecture’

A resume-driven architecture occurs when the interests of developers lead them to designs that no longer align with maximized impacts and outcomes for the organization. Often, the developer clings to a technology that provides them a greater level of control and, at least initially, a higher salary. Meanwhile, the organization gets an architecture that only a handful of people know how to manage and maintain, limiting the available talent pool and hindering future innovation. ... There’s no sense in investing resources in a bespoke architecture if it’s not providing you with any differentiation—especially when competitors are achieving the same outcome with fewer resources. Moreover, getting stuck in a Stage Two mindset when the field moves on to Stage Three (or, worse, Stage Four) cuts you off from the next wave of innovation. Subsequent technology breakthroughs often build on top of—and interoperate with—the previous technology layers. If you’re stuck with a custom architecture when the industry has moved on, you can miss out on the next wave of innovation and fall further behind competitors.


In the Great Microservices Debate, Value Eats Size for Lunch

A key criterion for a service to be standing alone as a separate code base and a separately deployable entity is that it should provide some value to the users — ideally the end users of the application. A useful heuristic to determine whether or not a service satisfies this criterion is to think about whether most enhancements to the service would result in benefits perceivable by the user. If in a vast majority of updates the service can only provide such user benefit by having to also get other services to release enhancements, then the service has failed the criterion. ... Providing value is also about the cost efficiency of designing as multiple services versus combining as a single service. One such aspect that was highlighted in the Prime Video case was chatty network calls. This could be a double whammy because it not only results in additional latency before a response goes back to the user, but it might also increase your bandwidth costs. This would be more problematic if you have large or several payloads moving around between services across network boundaries. 


Enhancing Code Reviews with Conventional Comments

In software development, code reviews are a vital practice that ensures code quality, promotes consistency, and fosters knowledge sharing. Yet, at times, they can drive me absolutely bananas! However, the effectiveness of code reviews is contingent on clear, concise communication. This is where Conventional Comments play a pivotal role. Conventional Comments provide a standardized method of delivering and receiving feedback during code reviews, reducing misunderstandings and promoting more efficient discussions. Conventional Comments are a structured commenting system for code reviews and other forms of technical dialogue. They establish a set of predefined labels, such as nitpick, issue, suggestion, praise, question, thought, and notably, non-blocking. Each label corresponds to a specific comment type and expected response. ... By standardizing labels and formats, Conventional Comments enhance the clarity of comments, eliminating vague language and misunderstandings, ensuring all participants understand the intent and meaning of the comments.
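
For illustration, a Conventional Comment follows the pattern "label (decorations): subject", optionally followed by a discussion; the review comments below are hypothetical examples of the style:

    suggestion (non-blocking): Extract this retry logic into a shared helper.

    It now appears in three handlers, and a single helper would keep the
    backoff policy consistent.

    issue (blocking): This query interpolates user input directly into SQL.

    Please switch to a parameterized query before merging.

    praise: Really thorough test coverage on the edge cases here.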


How the modern CIO grapples with legacy IT

When reviewing products and services, Abernathy considers whether a technology still fits into requirements for simplicity of geographies, designs, platforms, applications, and equipment. “Driving for simplicity is of paramount importance because it increases quality, stability, value, agility, talent engagement and security,” she says. Other red flags for replacement include point solutions, duplicative solutions, or technologies that become very challenging because of unreasonable pricing models, inadequate support or instability. In some ways, moving to SaaS-based applications makes the review process simpler because decisions as to whether and when to update and refactor are up to the provider, Ivy-Rosser says. But while technology change decisions are the responsibility of the provider, if you’re modernizing in a hybrid world, you need to make sure your data is ready to move and that any changes don’t create privacy issues. With SaaS, the review should take a hard look at the issues surrounding ownership and control.


The psychological impact of phishing attacks on your employees

The aftermath of a successful phishing attack can be emotionally draining, leaving people feeling embarrassed and ashamed. The fear of accidentally clicking a phishing email can affect a person’s performance and productivity at work. Even simulated phishing attacks can cause stress when employees are lured with fake promises of bonuses or freebies. Furthermore, when phishing emails repeatedly get through security measures and are not neutralized, employees may view these as safe and click on them. This could ultimately lead to employees losing faith in their employer’s ability to protect them. ... Organizations owe it to their employees to be proactive. To ensure employees are protected, they should implement advanced technology that uses Artificial Intelligence and Machine Learning models, such as Natural Language Processing (NLP) and Natural Language Understanding. These tools can detect even the most advanced phishing attempts and will serve as a safety net.


Cyber liability insurance vs. data breach insurance: What's the difference?

Understanding the distinction is important, as cyber insurance is becoming an integral part of the security landscape. Many companies may have no choice but to find insurance as more organizations are requiring that their business partners have cyber coverage. Many traditional business insurance policies will simply not cover cyber incidents, considering them outside the scope of the agreement, which is why cyber insurance has become a separate form of protection. It’s also important to note that getting insurance isn’t guaranteed — insurers are increasingly asking for more proof that strong cybersecurity strategies are in place before agreeing to provide coverage. Many companies may have no choice but to meet such terms. Put simply, cyber liability insurance refers to coverage for third-party claims asserted against a company stemming from a network security event or data breach. Data breach insurance, on the other hand, refers to coverage for first-party losses incurred by the insured organization that has suffered a loss of data.


These leaders recognize that transformation investments remain critical to any business, and they plan to emerge from these volatile times armed with new business models and revenue streams. In short, they plan to continue winning through transformation, and they are laser-focused about how they will do it. You might even say they’re “outcomes obsessed.” ... Remember, your goal is to prune the tree so it can thrive—not just to go around sawing off branches. Any cuts must set up individuals, teams, and departments for long-term success, despite the short-term pain. One way I’ve seen successful leaders do this is by taking the choices they are considering (both cutting investments and expanding them) and mapping them out in terms of their expected financial and nonfinancial impact ... Top-performing companies look beyond functional excellence, and instead aim for enterprise-level reinvention that extends across the company’s business, operating, and technology models. You should too. These transformations enable you to strengthen ecosystems, close capability gaps, and better chart your future revenue streams. 


Don't Let Age Mothball Your IT Career

Age discrimination is a significant concern in the IT industry, Schneer says. “Some companies may prioritize younger workers who are perceived to be more tech-savvy and adaptable,” she notes. “However, experienced professionals bring valuable skills and knowledge that can be an asset to any organization.” Weitzel observes that it's difficult to know how prevalent age discrimination is in any industry. “But applicants can be proactive in combatting any false assumptions by showcasing upfront the current skills and recent experience that employers are seeking.” Age discrimination may be more prevalent in certain IT fields, such as software development or web design, where rapid advancements in technology can make older professionals feel less relevant, Schneer says. “However, roles that require extensive experience and expertise, such as IT management or cybersecurity, may be less susceptible to age bias.” When encountering suspected age bias, senior IT workers should document any incidents or patterns of behavior that suggest discrimination, Schneer advises.


Thinking Deductively to Understand Complex Software Systems

The main goal is to think through the role of tests in helping you understand complex code, especially in cases where you are starting from a position of unfamiliarity with the code base. I think most of us would agree that tests allow us to automate the process of answering a question like "Is my software working right now?". Since the need to answer this question comes up all the time, at least as frequently as you deploy, it makes sense to spend time automating the process of answering it. However, even a large test suite can be a poor proxy for this question since it can only ever really answer the question "Do all my tests pass?". Fortunately, tests can be useful in helping us answer a larger range of questions. In some cases they allow us to dynamically analyse code, enabling us to glean a genuine understanding of how complex systems operate, that might otherwise be hard won.
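
As a sketch of that idea, a "learning test" asserts what unfamiliar code actually does rather than what it should do, turning exploration into executable documentation; the prorate function below is a stand-in defined inline so the example runs:

    import pytest

    # Stand-in for unfamiliar legacy code; in practice you would import it.
    def prorate(monthly_fee, days_used, days_in_month):
        return int((monthly_fee * days_used / days_in_month) * 100) / 100

    # Behaviors discovered by experiment, now locked in as tests.
    def test_prorate_truncates_rather_than_rounds():
        assert prorate(30.0, 10, 31) == 9.67      # exact value is 9.677...

    def test_prorate_blows_up_on_zero_length_month():
        with pytest.raises(ZeroDivisionError):
            prorate(30.0, 0, 0)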



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - June 13, 2023

AI and tech innovation, economic pressures increase identity attack surface

In the new attack observed by Microsoft, the attackers, which the company tracks under the temporary Storm-1167 moniker, used a custom phishing toolkit they developed themselves, one that uses an indirect proxy method. This means the phishing page set up by the attackers does not serve any content from the real log-in page but rather mimics it as a stand-alone page fully under the attackers' control. When the victim interacts with the phishing page, the attackers initiate a login session with the real website using the victim-provided credentials and then ask for the MFA code from the victim using a fake prompt. If the code is provided, the attackers use it for their own login session and are issued the session cookie directly. The victim is then redirected to a fake page. This is more in line with traditional phishing attacks. "In this AitM attack with indirect proxy method, since the phishing website is set up by the attackers, they have more control to modify the displayed content according to the scenario," the Microsoft researchers said.


Revolutionizing DevOps With Low-Code/No-Code Platforms

With non-IT professionals developing applications, there is a higher risk of introducing vulnerabilities that could compromise the security of the application and the organization. Additionally, the lack of oversight and governance could lead to poor coding practices and technical debt. For instance, the use of new-generation iPaaS platforms by citizen integrators has made it difficult for security leaders to have full visibility into the organization’s valuable assets. Attackers are aware of this and have already taken advantage of improperly secured app-to-app connections in recent supply chain attacks, such as those experienced by Microsoft and GitHub. ... As organizations try to integrate low-code and no-code applications with legacy systems or other third-party applications, technical challenges can arise. For example, if an organization wants to integrate a low-code application with an existing ERP system, it may face challenges in terms of data mapping and synchronization. Some low-code and no-code applications are built to export data and share it well, but when it comes to integrating event triggers, business logic, or workflows, these software solutions hit limits. 


Rethinking AI benchmarks: A new paper challenges the status quo of evaluating AI

One of the key problems that Burnell and his co-authors point out is the use of aggregate metrics that summarize an AI system’s overall performance on a category of tasks such as math, reasoning or image classification. Aggregate metrics are convenient because of their simplicity. But the convenience comes at the cost of transparency and lack of detail on some of the nuances of the AI system’s performance on critical tasks. “If you have data from dozens of tasks and maybe thousands of individual instances of each task, it’s not always easy to interpret and communicate those data. Aggregate metrics allow you to communicate the results in a simple, intuitive way that readers, reviewers, or — as we’re seeing now — customers can quickly understand,” Burnell said. “The problem is that this simplification can hide really important patterns in the data that could indicate potential biases, safety concerns, or just help us learn more about how the system works, because we can’t tell where a system is failing.”
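
A tiny numeric illustration of the point, with fabricated results: the aggregate score looks healthy while one slice of instances fails badly.

    import numpy as np

    # Fabricated per-instance outcomes: 1 = correct, 0 = wrong.
    easy = np.ones(900)          # 900 "easy" instances, all correct
    hard = np.zeros(100)         # 100 "hard" instances (e.g. a minority subgroup)
    hard[:20] = 1                # the system gets only 20 of them right

    results = np.concatenate([easy, hard])
    print("aggregate accuracy:", results.mean())   # 0.92 -- looks fine
    print("easy-slice accuracy:", easy.mean())     # 1.00
    print("hard-slice accuracy:", hard.mean())     # 0.20 -- the hidden failure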


A Practical Guide for Container Security

Developers and DevOps teams have embraced the use of containers for application deployment. In a report, Gartner stated, "By 2025, over 85% of organizations worldwide will be running containerized applications in production, a significant increase from less than 35% in 2019." On the flip side, various statistics indicate that the popularity of containers has also made them a target for cybercriminals who have been successful in exploiting them. According to a survey released in a 2023 State of Kubernetes security report by Red Hat, 67% of respondents stated that security was their primary concern when adopting containerization. Additionally, 37% reported that they had suffered revenue or customer loss due to a container or Kubernetes security incident. These data points emphasize the significance of container security, making it a critical and pressing topic for discussion among organizations that are currently using or planning to adopt containerized applications.


6 finops best practices to reduce cloud costs

Centralizing cloud costs from public clouds and data center infrastructure is a key finops concern. The first thing finops does is to create a single-pane view of consumption, which enables cost forecasting. Finops platforms can also centralize operations like shutting down underutilized resources or predicting when to shift off higher-priced reserved cloud instances. Platforms like Apptio, CloudZero, HCMX FinOps Express, and others can help with shift-left cloud cost optimizations. They also provide tools to catalog and select approved cloud-native stacks for new projects. ... “Today’s developers now have a choice between monolithic cloud infrastructure that locks them in and choosing to assemble cloud infrastructure from modern, modular IaaS and PaaS service providers,” says Kevin Cochrane, chief marketing officer of Vultr. “By choosing the latter, they can speed time to production, streamline operations, and manage cloud costs by only paying for the capacity they need.” As an example, a low-usage application may be less expensive to set up, run, and manage on AWS Lambda with a database on AWS RDS, rather than running it on AWS EC2 reserved instances.
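
A back-of-the-envelope sketch of that Lambda-versus-reserved-instance comparison; the unit prices below are placeholders, not current AWS rates:

    # Placeholder prices -- substitute the real rates from your provider.
    LAMBDA_PER_GB_SECOND = 0.0000167   # assumed compute rate
    LAMBDA_PER_REQUEST   = 0.0000002   # assumed request rate
    EC2_RESERVED_MONTHLY = 30.00       # assumed small reserved instance, per month

    def lambda_monthly_cost(requests_per_month, avg_ms, memory_gb=0.5):
        gb_seconds = requests_per_month * (avg_ms / 1000) * memory_gb
        return gb_seconds * LAMBDA_PER_GB_SECOND + requests_per_month * LAMBDA_PER_REQUEST

    # A low-usage app: 100,000 requests a month at 200 ms each.
    low = lambda_monthly_cost(100_000, 200)
    print(f"Lambda: ${low:.2f}/month vs reserved EC2: ${EC2_RESERVED_MONTHLY:.2f}/month")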


Artificial Intelligence: A Board of Directors Challenge – Part II

It is essential for organizations to dedicate time and effort to consider the potential unintended consequences or “unknown unknowns” of AI deployments. This will prevent adverse outcomes that may arise if AI is deployed without proper consideration. To achieve this, it is necessary to understand the Rumsfeld Knowledge Matrix, a conceptual framework introduced by Donald Rumsfeld, the former United States Secretary of Defense, to categorize and analyze knowledge and information based on different levels of certainty and awareness. The matrix consists of four quadrants. Known knowns: these are things that we know and are aware of. They represent information that is well understood and can be easily articulated. I call these “Facts.” Known unknowns: these are things that we know we don’t know. In other words, there are gaps in our knowledge or information which we are aware of and recognize as areas where further research or investigation is needed. We need to ask these “Questions.”


How to achieve cyber resilience?

Instead of relegating security development to a forgettable annual calendar reminder, a continuous approach must keep security at the forefront of mind throughout the year. Security threats also need to be brought to life with realistic simulation exercises. This approach will provide a much more engaging experience for participants and a far more accurate indication of their abilities. Real-life exercises give far more insight into an individual’s mindset and potential than a certification’s often rote, static nature. Security teams must be ready to respond rapidly and confidently to the latest emerging threats, aligned with industry best practices. They must have the right skills, from closing off newly discovered zero days, to mitigating serious incoming threats like attacks exploiting Log4Shell. But they must also be able to apply them calmly and in control even if they face a looming crisis. This capability can only be developed through continuous exercise.


The IT talent flight risk is real: Are return-to-office mandates the right solution?

Most workers require location flexibility when considering a job change. In addition, most workers in an IT function would only consider a new job or position that allows them to work from a location of their choosing. Requiring employees to return fully on-site is also a risk to DEI. Underrepresented groups of talent have seen improvements in how they work since being allowed more flexibility. For example, most women who were fully on-site prior to the pandemic, but have been remote since, report their expectations for working flexibly have increased since the beginning of the pandemic. Employees with a disability have also found a vast improvement to the quality of their work experience. Since the pandemic, Gartner research shows that knowledge workers with a disability have found the extent to which their working environment helps them be productive has improved. In a hybrid environment for this population, perceptions of equity have also improved, as they have experienced higher levels of respect and greater access to managers.


Common Cybersecurity Risks to ICS/OT Systems

Protecting ICS/OT systems from cyberthreats is crucial for ensuring the resilience of critical infrastructure. Recent cyberattacks on ICS/OT systems have highlighted the potential impact of these attacks on critical infrastructure and the need for organizations to prioritize cybersecurity for their ICS/OT systems. By being aware of common cybersecurity risks and taking proactive steps to mitigate them, organizations can protect their ICS/OT systems and maintain operational resilience. The above-mentioned incidents demonstrate that cyberattacks on ICS/OT systems can cause physical harm, financial losses and public safety risks. Organizations must protect their ICS/OT systems from cyberthreats, such as conducting regular vulnerability assessments, implementing network segmentation and providing employee training on cybersecurity best practices. Compliance with relevant regulations and standards and collaboration between IT and OT teams can also help mitigate cybersecurity risks to ICS/OT systems.


10 emerging innovations that could redefine IT

The most common paradigm for computation has been digital hardware built of transistors that have two states: on and off. Now some AI architects are eyeing the long-forgotten model of analog computation, where values are expressed as voltages or currents. Instead of just two states, these can take almost an infinite number of values, or at least as many as the system's precision can reliably distinguish. The fascination with the idea comes from the observation that AI models don’t need the same kind of precision as, say, bank ledgers. If some of the billions of parameters in a model drift by 1%, 10% or even more, the others will compensate and the model will often still be just as accurate overall. ... The IT department has a big role in this debate as it tests and deploys the second and third generation of collaboration tools. Basic video chatting is being replaced by more purpose-built tools for enabling standup meetings, casual discussions, and full-blown multi-day conferences. The debate is not just technical. Some of the decisions are being swayed by the investment that the company has made in commercial office space.
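
A quick numeric check of the weight-drift tolerance claim above, using a toy scikit-learn model on synthetic data; the drift percentages are illustrative, not measurements of any analog hardware:

    import copy
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("clean accuracy:", model.score(X, y))

    rng = np.random.default_rng(0)
    for drift in (0.01, 0.10, 0.30):
        noisy = copy.deepcopy(model)
        # Perturb every weight by up to +/- drift, as analog noise might.
        noisy.coef_ = model.coef_ * (1 + rng.uniform(-drift, drift, model.coef_.shape))
        print(f"accuracy with {drift:.0%} weight drift:", noisy.score(X, y))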



Quote for the day:

"When you accept a leadership role, you take on extra responsibility for your actions toward others." -- Kelley Armstrong

Daily Tech Digest - June 12, 2023

Cloud-Focused Attacks Growing More Frequent, More Brazen

One key finding is that hackers are becoming more adept — and more motivated — at targeting enterprise cloud environments through a growing range of tactics, techniques and procedures. These include deploying command-and-control channels on top of existing cloud services, achieving privilege escalation, and moving laterally within an environment after gaining initial access. ... While attack vectors and methods are increasingly varied, they often rely on some common denominators, including the oldest one around: human error. For example, 38% of observed cloud environments were running with insecure default settings from the cloud service provider. Indeed, cloud misconfigurations are one of the major sources of breaches. Similarly, identity and access management (IAM) is another huge area of risk rife with human error. In two out of three cloud security incidents observed by CrowdStrike, IAM credentials were found to be over-permissioned, meaning the user had higher levels of privileges than necessary.
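
As a sketch of the IAM point, even a few lines can flag the most obvious over-permissioning in a policy document; the policy JSON below is a hypothetical example, not taken from the report:

    # Hypothetical IAM policy document; real ones come from your cloud provider.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::app-logs/*"},
            {"Effect": "Allow", "Action": "*", "Resource": "*"},  # over-permissioned
        ],
    }

    for stmt in policy["Statement"]:
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        if stmt["Effect"] == "Allow" and ("*" in actions or stmt["Resource"] == "*"):
            print("over-permissive statement:", stmt)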


Enterprise Architecture Maturity Model – a Roadmap for a Successful Enterprise

Assessment is the evaluation of the EA practice against the reference model. It determines the level at which the organization currently stands. It indicates the organization’s maturity in the area concerned, and the practices on which the organization needs to focus to see the greatest improvement and the highest return on investment. ... Development of the EA is an ongoing process and cannot be delivered overnight. An organization must patiently work to nurture and improve upon its EA program until architectural processes and standards become second nature and the architecture framework and the architecture blueprint become self-renewing. Maturity assessment is a standard business tool for understanding the maturity level of the organization. An EAM Assessment Framework comprises a maturity model with different maturity levels, a set of elements to be assessed, a methodology, and a toolkit for assessment (questionnaires, tools, etc.). The outcome is a detailed assessment report, which describes the maturity of the organization, as well as the maturity against each of the architectural elements.


European Commission Wants Labels on AI-Generated Content -- Now

The regulatory push might lead to deeper scrutiny of where AI-generated content comes from, down to its data sources. Jan Ulrych, vice president of research and education at Manta, favors the efforts the EU is taking to regulate this space. Manta provides a data lineage platform that offers visibility into data flows, and the company sees data lineage as a way to fact-check AI content. Ulrych says when it comes to news content, there does not seem to be an effective method in place yet to validate or make sources transparent enough for fact-checking in real time, especially with AI’s ability to spawn content. “AI sped up this process by making it possible for anyone to generate news,” he says. It is almost a given that generative AI will not disappear because of regulations or public outcry, but Ulrych sees the possibility of self-regulation among vendors along with government guardrails as healthy steps. “I would hope, to a large degree, the vendors themselves would invest into making the data they’re providing more transparent,” he says.


Finding The Right Size of a Microservice

Determining the right level of granularity — the size of the service — is one of the many hard parts of a microservices architecture that we as developers struggle with. Granularity is not defined by the number of classes or lines of code in a service, but rather by what the service does — hence the conundrum of getting service granularity right. ... Since we are living in the era of microservices and nano-services, many development teams make the mistake of breaking services up arbitrarily and ignoring the consequences that come with it. To find the right size, one should carry out a trade-off analysis across different parameters and make a calculated decision on the context and boundary of a microservice. ... The scope and function mainly depend on two attributes. The first is cohesion: the degree and manner to which the operations of a particular service interrelate. The second is the overall size of a component, usually measured in terms of the number of responsibilities, the number of entry points into the service, or both.


What is Web3 decentralized cloud storage?

Web3 storage is, as the name suggests, decentralised, meaning the data is held across multiple repositories. If a government agency, or hacker, wanted to obtain confidential data, there’s no single location to raid. Unless granted the user’s keys, there’s no way to unlock data held on Web3 storage. Security and privacy are guaranteed. Web3 cloud storage scales well. Local storage can run out, but with Web3 there is always room for more (even if you may have to pay to access the extra space). “It can also scale horizontally, accommodating the increasing demand for data storage without centralised bottlenecks,” says Servadei. Access speeds are acceptable. “It’s going to be slower than a normal hard drive or CD. But it stores data the same way Amazon S3 stores data.” Decentralised storage is also a more permanent way to store files. Hosting sites don’t last forever. Anyone wanting to access historic websites on Geocities or 4sites or Xanga will know the annoyance of web hosts going bust. Link rot is a curse of the internet.


To solve the cybersecurity worker gap, forget the job title and search for the skills you need

Steven Sim, CISO for a global logistics company and a member of the Emerging Trends Working Group with the IT governance association ISACA, has adopted this thinking. ... “They may not have the relevant [security] certification, but they have the domain knowledge,” he says, pointing out that OT security has some requirements that differ from IT security which makes that OT background particularly valuable on his team. Sim says he looks for “a passion and keenness to learn” in such candidates. He also looks for candidates who demonstrate ownership of their work, a high degree of integrity, a willingness to collaborate, and a “risk-based mindset.” Sim then upskills such hires by having them receive on-the-job training and earn security certifications. Moreover, he says drawing workers from OT helps create more collaboration with the function and ultimately more secure OT operations. He says that result has helped get OT leaders onboard with his recruiting efforts, adding that they see it as a “symbiotic win-win relationship.”


Innovation without disruption: virtual agents for hyper-personalized customer experience (CX)

VAs help “hold the fort” on routine calls so live agents can focus more on complicated interactions, but they’re smart enough to handle certain complexities on their own. They can effortlessly navigate topics, handle a wide range of questions, and seamlessly operate across multiple channels. The technology also grows in intelligence with use, allowing VAs to act with greater – comparably humanlike – awareness. For example, you might present a customer with a choice of channels for engagement such as chat, phone, and social media. After communicating with the customer, your VA can default to that person’s preferred channel for future conversations. ... VAs can hyper-personalize even routine interactions. Let’s say a customer initiates a chat session with a VA for resetting a forgotten password. The VA can ask the customer if they would like to switch to text messaging for a more effective multimedia experience. If the customer accepts, the chat session will end and the VA will seamlessly switch to SMS.


Building a secure coding philosophy

Discussing secure coding, Læarsson says: “From criteria’s definition through coding and release – our quality assurance processes include both automated and manual testing, which helps us ensure that we push and maintain high standards with every application and update we do. The software we develop is tested for both functional and structural quality standards – from how effectively applications adhere to the core design specifications, to whether it meets all security, accessibility, scalability and reliability standards.” Peer review is used to run an in-depth technical and logical line-by-line review of code to ensure its quality. Within the National Digitalisation Programme, Læarsson says: “Our low-code development projects are divided into scrum teams, where each team creates stories and tasks for each sprint and defines specific criteria for these.” These stories enable people to understand the role of a particular piece of software functionality. “When stories are done, they are tested by the same analysts who have specified the stories. 


UK Takes the First Step to Stop Authorized Payment Scams

The U.K.'s Payment Systems Regulator said fighting APP scams requires taking an ecosystem-level approach. Fraudsters are specifically targeting faster payment services because of the speed of transactions, so financial institutions need to be confident that they can authorize payments between each other, no matter what the channel. Consumers and businesses have always trusted banks to provide expertise and capabilities they do not possess themselves. They want to know that their bank is doing everything it can to protect them from scammers. Ken Palla, retired director of MUFG Bank, said the regulator has put together a very detailed and complete document. "It is clear what is included in the policy statement and what is excluded. The PSR wants payment firms to take responsibility for protecting their customers at the point a payment is made. In doing so, it expects the new reimbursement requirement to lead firms to innovate and develop effective, data-driven interventions to change customer behavior."


Building a culture of security awareness in healthcare begins with leadership

A well-tailored security program must be just that: tailored. Many security legal frameworks are moving from specificity in controls towards a discretionary approach. This “discretionary” standard is set by governing bodies that interpret leading-edge developments in the industry. An organization must trace what data is stored or processed and ensure security controls are mapped internally to the organization and externally across vendors. Healthcare organizations must dedicate time to ensure appropriate administrative, technical, and physical controls are in place at the organization and its vendors to protect the data stored and processed. The saying “one size fits all” is never true for how a security program is administered and applied in the healthcare technology industry, or any other industry. However, the fundamental principles are the same: understanding what data is processed by an organization, identifying true risks (internal and external) to the data, evaluating the impacts of those risks, and determining whether existing controls are adequate to reduce those risks to an acceptable level.



Quote for the day:

"The key to being a good manager is keeping the people who hate me away from those who are still undecided." -- Casey Stengel

Daily Tech Digest - June 11, 2023

Tips Every CFO Should Consider For Implementing Tech Solutions

Conduct a cost assessment to pinpoint areas where tech upgrades may be needed and determine if these upgrades will add value to your financial operations. Remember, newer doesn’t necessarily mean better. Instead, invest in tech solutions and upgrades that improve efficiency across the board. By taking the initiative and identifying areas where tech solutions can solve specific pain points, CFOs can help ensure a seamless transition when implementing new technology. ... While many organizations today jump at the opportunity to implement updated solutions to replace legacy systems, an overhaul doesn’t have to be made just because new technologies become available. ... The key is fully understanding why you’re switching to and implementing new technology. Just because certain tasks and processes can be done using advanced tech tools doesn’t necessarily mean your company needs new software.


The power of data management in driving business growth

Effective data management means business leaders can stay abreast of the ever-surging tide of data, deploy new services quickly, and scale faster. It can deliver insights which lead to new business streams or even the reinvention of the entire company. Data management comes in multiple forms, encompassing both hardware and software. Solutions include unified storage, which enables organisations to run and manage files and applications from a single device, and storage-area networks (SANs), offering network access to storage devices. ... As well as data management, the Data Leaders thrive in two other key areas: data analytics and data security. These three elements are interdependent. Data management naturally works hand-in-hand with data analytics, and data security is increasingly important as business leaders hope to share data with partners securely. It’s impossible for leaders to thrive at data management if they haven’t harnessed data security, or to adopt data analytics without mastering data management. 


Zero Trust: Beyond the Smoke and Mirrors

Despite misleading marketing, a lack of transparency into the available technologies, the limited scope of the technologies themselves, mounting privacy concerns, as well as a complete question mark when it comes to price and deployment, trust in zero trust remains. Organizations know they need to embrace it – and preferably yesterday. ... Despite this enhanced savviness and market maturity around zero trust, major barriers to implementation remain. These include: Damn you, marketers. Some vendors may use misleading marketing tactics to promote their zero-trust solutions, overstating their capabilities or making false claims about their performance. See through the noise the best you can. Most tools let you test things out first. Take vendors up on that. What the hell does this cost? Implementing zero trust security solutions can be expensive, especially for organizations with large IT infrastructures. Chances are, the more devices, networking gear, locations, and compliance standards you need to adhere to…the more this will cost. Complexity is almost always guaranteed. Zero trust can also be complex to deploy, especially across distributed, multi-vendor networks.


Technical Debt is Inevitable. Here’s How to Manage It

Technical debt is a threat to innovation, so how can we mitigate it? Well, if you don’t already do so, it’s a good idea to build technical debt into your budgeting, planning and ongoing operations, said Orlandini. “You have to manage it, expect it and be responsible with your technical stacks in the same way you are responsible with your financial stacks,” he said. Here are a few other ways to manage the debt you have and avoid accumulating more. Consider using AI to refactor legacy code. Generative AI could be leveraged to refactor legacy code into more modern programming languages. This could help automatically convert PEARL code, for instance, into JavaScript. Today’s large language models (LLMs) could help solve many of today’s problems. However, since they are built on a pre-existing body of work, they will use less trendy languages and might cause some technical debt in the process, cautioned Orlandini. Don’t over-rely on new DevOps processes as a cure-all. DevOps can accelerate the time to release features, but it does not, by its nature, eliminate technology changes, said Orlandini.
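
The AI-assisted translation idea can be sketched roughly as below. The complete() function is a stand-in for whatever LLM client an organization actually uses (no real vendor API is shown), and, per Orlandini's caution, its output would still need human review and testing before merging:

    # Hypothetical sketch of LLM-assisted code translation. complete() is a
    # placeholder, not a real vendor API; wire in your own client.
    def complete(prompt: str) -> str:
        raise NotImplementedError("connect an LLM provider here")

    def translate_legacy(source: str, src_lang: str, dst_lang: str) -> str:
        prompt = (
            f"Translate the following {src_lang} code to idiomatic {dst_lang}.\n"
            "Preserve behavior exactly; add comments where intent is unclear.\n\n"
            + source
        )
        return complete(prompt)  # output still needs human review and tests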


Cloud repatriation and the death of cloud-only

IT analyst firm IDC told us that its surveys show repatriation as a steady trend ‘essentially as soon as the public cloud became mainstream,’ with around 70 to 80 percent of companies repatriating at least some data back from public cloud each year. “The cloud-first, cloud-only approach is still a thing, but I think it's becoming a less prevalent approach,” says Natalya Yezhkova, research vice president within IDC's Enterprise Infrastructure Practice. “Some organizations have this cloud-only approach, which is okay if you're a small company. If you're a startup and you don't have any IT professionals on your team it can be a great solution.” While it may be common to move some workloads back, it’s important to note that a wholesale withdrawal from the cloud is incredibly rare. ... “They think about public cloud as an essential element of the IT strategy, but they don’t need to put all the eggs into one basket and then suffer when something happens. Instead, they have a more balanced approach; see the pros and cons of having workloads in the public cloud vs having workloads running in dedicated environments.”


5 Ways to Implement AI During Information Risk Assessments

The problem is that there is no such thing as a perfectly secure system; there will always be vulnerabilities that an IT team is unaware of. This is why IT teams perform regular penetration tests – simulated attacks to test a system’s security. ... By turning this task over to AI, companies can run automated penetration tests at any time. These AI models can work in the background and provide immediate alerts the moment a vulnerability is found. Better still, the AI can classify vulnerabilities based on the threat level, meaning if there’s a vulnerability that could allow for a system-wide infiltration, then that vulnerability will be prioritized above lesser threats. ... AI-powered predictive analytics can be an incredibly powerful tool that allows an organization to estimate the results of a marketing campaign, a customer’s lifetime value, or the impact of a looming recession. But predictive analytics can also be used to predict the likelihood of a future data breach.
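
A minimal sketch of that severity-based triage follows. The findings and scores are invented, and the 9.0 alert cutoff is an arbitrary illustrative threshold (CVSS-style scoring is a common real-world analogue):

    # Illustrative triage: surface the highest-severity findings first so a
    # potential system-wide infiltration outranks lesser threats.
    findings = [
        {"id": "VULN-101", "desc": "outdated TLS configuration", "severity": 5.3},
        {"id": "VULN-102", "desc": "remote code execution in API gateway", "severity": 9.8},
        {"id": "VULN-103", "desc": "verbose error messages", "severity": 3.1},
    ]

    for f in sorted(findings, key=lambda f: f["severity"], reverse=True):
        action = "IMMEDIATE ALERT" if f["severity"] >= 9.0 else "queued for review"
        print(f'{f["id"]} ({f["severity"]}): {f["desc"]} -> {action}')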


13 Cloud Computing Risks & Challenges Businesses Are Facing These Days

Starting with one of the major findings of this report, we can see that both enterprises and small businesses cite the ability to manage cloud spend as the biggest challenge, overtaking security concerns after a decade in first place. This can be a consequence of economic volatility, where organizations keep spending and innovating with multiple cloud services to keep up with the digital world in an unstable environment. ... Proper IT governance should ensure IT assets are implemented and used according to agreed-upon policies and procedures, ensure that these assets are properly controlled and maintained, and ensure that these assets support your organization’s strategy and goals. In today’s cloud-based world, IT does not always have full control over the provisioning, de-provisioning, and operations of infrastructure. This has increased the difficulty for IT to provide the governance, compliance, risk, and data quality management required. To mitigate the various risks and uncertainties in transitioning to the cloud, IT must adapt its traditional IT control processes to include the cloud. 


When are containers or serverless a red flag?

Limited use cases mean that containers and serverless technologies are well-suited for certain types of applications, such as microservices or event-driven functions, but they are not a fit for everything new. Legacy applications and other traditional systems may require significant modifications or restructuring to run effectively in containers or serverless environments. Of course, you can force-fit any technology to solve any problem, and with enough time and money, it will work. However, those “solutions” will be low-value and underoptimized, driving more spending and less business value. Complexity is a common downside of most new technology trends. Container and serverless platforms introduce additional complexity that the teams building and operating these cloud-based systems must deal with. Complexity usually means increased development and maintenance costs, less value, and perhaps unexpected security and performance problems. This is on top of the fact that they simply cost more to build, deploy, and operate.


Vector Databases: What Devs Need to Know about How They Work

Unsurprisingly, a vector database deals with vector embeddings. We can already perceive that dealing with vectors is not going to be the same as just dealing with scalar quantities. The queries we deal with in traditional relational tables normally match values in a given row exactly. A vector database interrogates the same space as the model which generated the embeddings. The aim is usually to find similar vectors. So initially, we add the generated vector embeddings into the database. As the results are not exact matches, there is a natural trade-off between accuracy and speed. And this is where the individual vendors make their pitch. Like traditional databases, there is also some work to be done on indexing vectors for efficiency, and post-processing to impose an order on results. Indexing is a way to improve efficiency as well as to focus on properties that are relevant in the search, paring down large vectors. Trying to accurately represent something big with a much smaller key is a common strategy in computing; we saw this when looking at hashing.
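
To make "finding similar vectors" concrete, here is a brute-force nearest-neighbor search using only NumPy, with randomly generated stand-in embeddings. Real vector databases replace the exhaustive scan with approximate indexes, which is exactly the accuracy-versus-speed trade-off described above:

    import numpy as np

    # Toy store of embeddings: one row per item, L2-normalized so that a dot
    # product equals cosine similarity.
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(1000, 64))
    embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

    def top_k_similar(query, k=5):
        q = query / np.linalg.norm(query)
        scores = embeddings @ q               # similarity against every stored vector
        return np.argsort(scores)[::-1][:k]  # indices of the k most similar items

    print(top_k_similar(rng.normal(size=64)))

At scale, the exhaustive matrix product becomes the bottleneck, which is why indexing and paring down large vectors matter so much to vendors.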


Understanding Data Mesh Principles

When an organization embraces a data mesh architecture, it shifts its data usage and outcomes from bureaucracy to business activities. According to Dehghani, four data mesh principles explain this evolution: domain-driven data ownership, data as a product, self-service infrastructure, and federated computational governance. ... The self-service infrastructure as a platform supports the three data mesh principles above: domain-driven data ownership, data as a product, and federated computational governance. Consider this interface an operating system where consumers can access each domain’s APIs. Its infrastructure “codifies and automates governance concerns” across all the domains. According to Dehghani, such a system forms a multiplane data platform, a collection of related cross-functional capabilities, including data policy engines, storage, and computing. Dehghani thinks of the self-service infrastructure as a platform that enables autonomy for multiple domains and is supported by DataOps.
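
One way to picture "data as a product" served through a self-service platform: each domain team publishes its dataset behind a small, uniform interface that the platform can discover and govern. The class and method names below are hypothetical, not drawn from Dehghani's writing:

    # Hypothetical sketch: each domain team owns a data product and serves it
    # through a common interface the self-service platform can discover.
    from abc import ABC, abstractmethod

    class DataProduct(ABC):
        domain: str  # owning domain, e.g. "orders"

        @abstractmethod
        def schema(self) -> dict:
            """Published, versioned schema consumers can rely on."""

        @abstractmethod
        def read(self, since: str) -> list[dict]:
            """Serve records; federated governance policies plug in here."""

    class OrdersDataProduct(DataProduct):
        domain = "orders"

        def schema(self) -> dict:
            return {"order_id": "str", "placed_at": "timestamp", "total": "decimal"}

        def read(self, since: str) -> list[dict]:
            return []  # a real product would query the domain's own store

The uniform interface is what makes the platform "self-service": consumers discover and read any domain's data the same way, while ownership stays with the domain.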



Quote for the day:

"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox