Daily Tech Digest - August 24, 2022

3 reasons cloud computing doesn’t save money

Without cloud spending visibility and insights, you’re basically driving a car without a dashboard. You don’t know how fast you’re going or when you’re about to run out of gas. A guessing game turns into a big surprise when cloud spending is way above what everyone initially thought. That sucking sound you hear is the value that you thought cloud computing would bring now leaving the business. Second, there is no discipline or accountability. A lack of cloud cost monitoring means we can’t see what we’re spending. The other side of this coin is a lack of accountability. Even when a business monitors cloud spending, that data is useless if everyone knows there are no penalties. Why should people change their behavior? They need known incentives to conserve cloud computing resources as well as known consequences. Accountability problems can usually be corrected by leadership making some unpopular decisions. Trust me, you’ll either deal with accountability now or wait until later, when it becomes much harder to fix.


How attackers use and abuse Microsoft MFA

The legitimate owner of an account compromised this way is unlikely to spot that the second MFA app has been added. “It is only obvious if one specifically looks for it. If one goes to the M365 security portal, they will see it; but most users never go to that place. It is where you can change your password without being prompted for it, or change an authenticator app. In day-to-day use, people only change passwords when mandated through the prompt, or when they change their phone and want to move their authenticator app,” Mitiga CTO Ofer Maor told Help Net Security. Also, an isolated, random prompt for the second authentication factor triggered by the attacker can easily go unnoticed or be ignored by the legitimate account owner. “They get prompted, but once the attacker authenticates on the other authenticator, that prompt disappears. There is no popup or anything that says ‘this request has been approved by another device’ (or something of that sort) to alert the user of the risk. ... ” Maor noted.


The emergence of the chief automation officer

AI and automation can transform IT and business processes to help improve efficiencies, save costs and enable people — employees — to focus on higher-value work. Two of the most important areas of IT operations in the enterprise are issue avoidance and issue resolution because of the massive impact they have on cost, productivity, and brand reputation. The rapid digital expansion among enterprises has led to an immediate uptick in demand from IT leaders to embrace AIops tools to increase workflow productivity and ensure proactive, continuous application performance. With AIops, IT systems and applications are more reliable, and complex work environments can be managed more proactively, potentially saving hundreds of thousands of dollars. This can enable IT staff to focus on high-value work instead of laborious, time-consuming tasks, and identify potential issues before they become major problems.


How a Service Mesh Simplifies Microservice Observability

According to Jay Livens, observability is the practice of capturing a system’s current state from the metrics and logs it generates. It helps us monitor the health of our application, generate alerts on failure conditions, and capture enough information to debug issues whenever they happen. ... A major aspect of observability is capturing network telemetry, and having good network insights can help us solve a lot of the problems we spoke about initially. Normally, the task of generating this telemetry data falls to the developers to implement. This is an extremely tedious and error-prone process that doesn’t really end at telemetry. Developers are also tasked with implementing security features and making communication resilient to failures. Ideally, we want our developers to write application code and nothing else. The complications of microservices networking need to be pushed down to the underlying platform. A better way to achieve this decoupling would be to use a service mesh like Istio, Linkerd, or Consul Connect.
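To make that burden concrete, below is a minimal, hypothetical sketch (every name is invented) of the per-call telemetry developers end up hand-rolling when the platform does not provide it. A mesh sidecar proxy records this kind of latency and status data without any application code.

```python
# A hypothetical sketch of hand-rolled telemetry; every name here is
# invented. A service mesh sidecar records the same data with no app code.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("telemetry")

def with_telemetry(call_name):
    """Wrap an outbound call to record latency and success or failure."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                log.info("%s ok latency_ms=%.1f", call_name,
                         (time.monotonic() - start) * 1000)
                return result
            except Exception:
                log.info("%s error latency_ms=%.1f", call_name,
                         (time.monotonic() - start) * 1000)
                raise
        return wrapper
    return decorator

@with_telemetry("inventory-service")
def fetch_inventory(item_id):
    return {"item": item_id, "count": 3}  # stand-in for a real network call

fetch_inventory("sku-42")
```

Multiply this by every outbound call, plus retries, timeouts, and mTLS, and the appeal of pushing it into the platform becomes clear.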


IT talent: 4 interview questions to prep for

Whether managers have a more hands-on approach or allow their direct reports more autonomy, identifying this during the interview process is in the best interest of both parties. Additionally, some candidates thrive in an office, while others are hoping for a completely remote position or even a hybrid option. Discussing and defining preferences and working environments helps clarify candidates’ expectations for their roles. It also benefits hiring managers, prospective employees, and the companies, which can avoid high turnover rates by being transparent in their recruiting phase. ... people generally love to talk about things that make them proud. By asking this question, hiring managers allow candidates to talk about who they are as individuals rather than just what they bring to the larger business. Obviously, pride can encompass past work projects, but some candidates might also cite volunteer contributions, family achievements, or other accomplishments. Overall, candidates should always be prepared to discuss experiences that have contributed to their growth. 


Beyond purpose statements

Many CEOs are starting to sound like politicians, throwing around lofty language that is vague and hard to pin down. And therein lies the problem, or certainly the challenge: to remain credible and trustworthy, leaders need to shift the conversation from fuzzy purpose bromides to more tangible and concrete statements about the impact their companies are having on society. That is not simply a matter of semantics, as there is a world of difference between purpose and impact. It is difficult to challenge a purpose. If a company says its reason for existing in some form or fashion is to try to make the world a better place, how can you pressure-test that claim? If that company is providing goods or services that customers are willing to pay for, and it employs people and pays vendors, then, ipso facto, it is doing something that has a perceived value. As long as it’s not doing anything criminal or unethical, it’s working “to promote the good of the people,” to borrow the language from one organization’s mission statement. But if you are claiming that you are making an impact, then you need proof. And that’s what makes a statement powerful.


Managing Expectations: Explainable A.I. and its Military Implications

AI systems can be purposefully programmed to cause death or destruction, either by the users themselves or through an attack on the system by an adversary. Unintended harm can also result from inevitable margins of error, which can exist or occur even after rigorous testing and proofing of the AI system according to applicable guidelines. Indeed, even ‘regular’ operations of deployed AI systems are mired in faults that are only discoverable at the output stage. ... A primary cause of such faults is flawed training datasets and commands, which can result in misrepresentation of critical information as well as unintended biases. Another, and perhaps far more challenging, reason is issues with algorithms within the system which are undetectable and inexplicable to the user. As a result, AI has been known to produce outputs based on spurious correlations and information processing that does not follow the expected rules, similar to what is referred to in psychology as the ‘Clever Hans effect’.


POCs, Scrum, and the Poor Quality of Software Solutions

It is generally accepted that quality is the ‘reliability of a product’. ‘Reliability’, though, as we are used to thinking of it in classical science, is the attribute of consistently getting the same results under the same conditions. In this classical view, building a Quality solution means that we should build a product that never fails. Ironically, understanding reliability this way harms Quality instead of achieving it. Aiming to build a product that never fails can only result in extremely complex systems that are hard to maintain, causing Quality to degrade over time. The issue with reliability in this classical sense is the false assumption that we control all conditions, while in fact we don’t (hardware failure, network latency, external service throttling, etc.). We need to extend the meaning of reliability to also accommodate cases where the conditions are not aligned: Quality is not only a measure of how reliable a software product is when it is up and running, but also a measure of how reliable it is when it fails.
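One way to read “reliable when it fails”: the failure path returns a well-defined, degraded answer instead of crashing. A minimal sketch, with invented names, of what that contract can look like:

```python
# A minimal sketch of "reliable when it fails": the caller always gets a
# well-defined answer, degraded if necessary. All names are invented.
def get_recommendations(user_id, fetch, fallback=("bestsellers",)):
    """Try the personalized service; fall back to a static list on failure."""
    try:
        return fetch(user_id)
    except Exception:
        # The failure path is part of the contract, not an afterthought.
        return list(fallback)

def flaky_fetch(user_id):
    raise TimeoutError("recommendation service unavailable")

print(get_recommendations("u1", flaky_fetch))  # ['bestsellers']
```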


Critical infrastructure is under attack from hackers. Securing it needs to be a priority - before it's too late

Many of the security measures required to protect networks – and people – from the potentially significant consequences of attacks are among the most commonly recommended, and often simplest, practices. ... Cybersecurity can become more complex for critical infrastructure, particularly when dealing with older systems, which is why it's vital that those running them know their own network, what's connected to it and who has access. Taking all of this into account, providing access only when necessary can keep networks locked down. In some cases, that might mean ensuring older systems aren't connected to the outside internet at all, but rather sit on a separate, air-gapped network, preferably offline. It might make some processes more inconvenient to manage, but it's better than the alternative should a network be breached. Incidents like the South Staffordshire Water attack and the Florida water incident show that cyber criminals are targeting critical infrastructure more and more. Action needs to be taken sooner rather than later to prevent potentially disastrous consequences not just for organizations, but for people too.


How to Nurture Talent and Protect Your IT Stars

Anderson adds that building out growth and learning opportunities starts with the CTO. “That means ensuring we have learning and training goals identified, which is used as a critical element for annual performance expectations of our IT leaders and managers, not only for themselves, but for their staff,” he says. As Court notes, the company invests internally through the LIFT University with a cadre of continuing education, augmented with external training. “For career growth, I recommend IT teams have a close reporting or partnership to the engineering and product teams,” Anderson adds. He says the rationale for this is simple -- as employees want to perfect their craft, they need to work for and with people that understand their craft, and push them to continually learn through team, project, and program collaboration. “As we all know, the one constant is that technology is constantly evolving, so continuous learning for employees, especially our IT team, is a must,” he says. SoftServe’s Semenyshyn says that closely monitoring employee burnout is a priority across the IT industry, pointing out the advantage of the IT business in a large global company is the possibility of rotations.



Quote for the day:

"Teamwork is the secret that make common people achieve uncommon result." -- Ifeanyi Enoch Onuoha

Daily Tech Digest - August 23, 2022

Unstructured data storage – on-prem vs cloud vs hybrid

Enterprises have responded to growing storage demands by moving to larger, scale-out NAS systems. The on-premise market here is well served, with suppliers Dell EMC, NetApp, Hitachi, HPE and IBM all offering large-capacity NAS technology with different combinations of cost and performance. Generally, applications that require low latency – media streaming or, more recently, training AI systems – are well served by flash-based NAS hardware from the traditional suppliers. But for very large datasets, and the need to ease movement between on-premise and cloud systems, suppliers are now offering local versions of object storage. The large cloud “hyperscalers” even offer on-premise, object-based technology so that firms can take advantage of object’s global namespace and data protection features, with the security and performance benefits of local storage. However, as SNIA warns, these systems typically lack interoperability between suppliers. The main benefits of on-premise storage for unstructured data are performance, security, plus compliance and control – firms know their storage architecture, and can manage it in a granular way.


What is CXL, and why should you care?

Eventually CXL is expected to be an all-encompassing cache-coherent interface for connecting any number of CPUs, memory, process accelerators (notably FPGAs and GPUs), and other peripherals. The CXL 3.0 spec, announced last week at the Flash Memory Summit (FMS), takes that disaggregation even further by allowing other parts of the architecture—processors, storage, networking, and other accelerators—to be pooled and addressed dynamically by multiple hosts and accelerators, just like the memory in 2.0. The 3.0 spec also provides for direct peer-to-peer communications over a switch or even across a switch fabric, so two GPUs could theoretically talk to one another without using the network or getting the host CPU and memory involved. Kurt Lender, co-chair of the CXL marketing work group and a senior ecosystem manager at Intel, said, “It’s going to be basically everywhere. It’s not just IT guys who are embracing it. Everyone’s embracing it. So this is going to become a standard feature in every new server in the next few years.” So how will the applications that run in enterprise data centers benefit?


Technology alone won’t solve your organizational challenges

Whatever your organization’s preference for team building, it should be carefully selected from a range of options, and it should be clear to everyone why the firm chose one particular structure over another and what’s expected of everyone participating. Start with desired outcomes and cultural norms, then articulate principles to empower action, and, finally, provide the skills and tools needed for success. ... Even in the most forward-thinking organizations, people want to know what a meeting is supposed to achieve, what their role is in that meeting, and if gathering people around a table or their screens is the most effective and efficient way to get to the desired outcome. Is there a decision to be made? Or is the purpose information sharing? Have people been given the chance to opt out if the above points are not clear? Asking these questions can serve as a rapid diagnostic for what you are getting right—and wrong—in your meetings. Poorly run meetings sap energy and breed mediocrity.


For developers, too many meetings, too little 'focus' time

That’s not to say that meetings aren’t important, but it makes sense for managers to find the right balance for their teams, said Dan Kador, vice president of engineering at Clockwise. “It's something that companies have to pay attention to and try to understand their meeting culture — what's working and what's not working for them." “It is important that teams get together to discuss things and make sure they are all on the same page, but often meetings are scheduled at regular intervals even if they aren’t necessary,” said Jack Gold, principal analyst and founder at J. Gold Associates. “We are all subjected to weekly meetings, or other intervals, where, even if there is nothing to discuss, the meeting takes place anyway. And some meeting organizers feel obligated to use up the entire scheduled time.” Of course, meeting overload is not just an issue for those writing code. “Too much time spent in meetings is not just a problem for developers,” said Gold. “It is a problem across the board for employees in many companies.”


How To Remain Compliant In The New Era Of Payment Security

To counter the threat of e-commerce skimming, the card companies are again using the two tools in their arsenal: making stolen data worthless and creating new technical security standards. To make stolen payment card data worthless, there’s a chip-equivalent technology for e-commerce called 3-D Secure v2, which has already been rolled out in the EU. This technology requires something more than just the knowledge of the numbers printed on a payment card to make an online transaction. After entering their payment card data, the consumer may have to further confirm a purchase using a bank’s smartphone app or by entering a code received by SMS. Alongside this re-engineering of the payment system, the latest version of the Payment Card Industry Data Security Standard (PCI DSS) includes new technical requirements to prevent and detect e-commerce skimming attacks. PCI DSS applies to all entities involved in the payment ecosystem, including retailers, payment processors and financial institutions. Firstly, website operators will need to maintain an inventory of all the scripts included in their website and determine why the script is necessary.
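As a rough illustration of that inventory requirement, the sketch below uses a made-up page and approved list to extract a page's script tags and flag anything that is not on the inventory:

```python
# A rough sketch of a PCI DSS-style script inventory check; the page body
# and the approved list are made-up examples.
from html.parser import HTMLParser

APPROVED = {"https://cdn.example.com/checkout.js"}  # hypothetical inventory

class ScriptCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.scripts.append(src)

page = ('<html><script src="https://cdn.example.com/checkout.js"></script>'
        '<script src="https://evil.example.net/skimmer.js"></script></html>')

collector = ScriptCollector()
collector.feed(page)
for src in collector.scripts:
    status = "approved" if src in APPROVED else "UNEXPECTED, investigate"
    print(src, "->", status)
```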


Q&A: How Data Science Fits into the Cloud Spend Equation

The great thing about cloud is you use it when you need it. Obviously, you pay for using it when you need it, but oftentimes data science applications, especially ones you’re running over large datasets, aren’t running continuously or don’t need to be structured in a way that they run continuously. Therefore, you’re talking about a very concentrated amount of spend for a very short amount of time. Buying hardware to do that means your hardware sits idle unless you are very active about making sure you’re being very efficient in the utilization of that resource over time. One of the biggest advantages of cloud is that it runs and scales as you need it to. So even a tiny team can run a massive computation and run it when they need to and not consistently. That adds challenges, of course. “I fired this thing off on Friday, I come back in on Monday and it’s still running, and I accidentally spent $6,000 this weekend. Oops.” That happens all the time and so much of that is figuring out how to establish guardrails. Sometimes data science gets treated like, “You know, they’re going to do whatever they need to.”
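Guardrails can start as simply as a scheduled job that compares month-to-date spend against a budget. A minimal sketch, assuming AWS and the boto3 Cost Explorer client (the threshold and the alert action are placeholders to adapt):

```python
# A minimal guardrail sketch, assuming AWS and boto3's Cost Explorer API:
# check month-to-date spend and print an alert when it crosses a threshold.
# BUDGET_USD and the alert action are placeholders; note Cost Explorer
# requires Start < End, so run this from the 2nd of the month onward.
import datetime

import boto3

BUDGET_USD = 5000.0  # hypothetical monthly guardrail

ce = boto3.client("ce")
today = datetime.date.today()
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": today.replace(day=1).isoformat(),
                "End": today.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)
spend = float(resp["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])
if spend > BUDGET_USD:
    print(f"ALERT: month-to-date spend ${spend:,.2f} exceeds ${BUDGET_USD:,.2f}")
```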


Advantages of open source software compared to paid equivalents

The strength of open source technology is the fact that these products are developed with an iterative approach by a large group of experts. Open source communities are made up of diverse sets of people from across the world. This kind of diversity is beneficial because ideas and issues get vetted in multiple ways. From an enterprise perspective, open source software is a safe investment because you know there is a dedicated community with product experience. Many developers aren’t working for money, and are easy to approach and ask for help. You can raise questions or concerns directly with developers, or opt to obtain a paid support plan through the community for highly technical inquiries. ... Of course, since open source products are designed for a large audience, sometimes they won’t be able to perfectly fit a company’s needs. Fortunately, the open source approach encourages customisation and integration, meaning your own internal teams can start with an open source baseline and tweak it. Improvements can also be fed back into the open source development cycle.


3 steps for CIOs to build a sustainable business

Data is key. To establish a baseline, the CIO must measure the impact of the enterprise’s full technology stack, including outside partners and providers. This requires asking for, extracting, and reconciling data across external parties – and remembering to aggregate more than just decarbonization data. Cloud and sourcing choices and the disposition of assets after a cloud migration contribute to the carbon footprint. The CIO must also guide employees to make good sustainability choices. One example: according to Cisco, there are 27.1 billion devices connected to the internet – that’s more than three devices for every person on the planet. Many enterprise employees carry two mobile phones but don’t need to – existing technology enables them to segment two different environments on one device. Also, organizations with service contracts can reject hardware refreshes from a contract, empowering employees to decide if they need a new device or just a new battery.


Architecture and Governance in a Hybrid Work Environment

Architects can’t architect if they don’t speak to other people. Likewise, governance isn’t effective if you are talking best practice to yourself alone in a dark room someplace. Getting this right in normal times isn’t always easy. People have meetings, they are working hard and don’t want to be disturbed, they need their coffee from the corporate cafeteria or the Starbucks down the street, they’re at lunch or they’re leaving at 4:30 to get to their kid’s baseball game. In short, it isn’t always possible in normal times to round people up and have a day-long whiteboard session on architecture. With hybrid working models, it is even more difficult because we can’t simply walk over to the cube next to us and have a conversation. In fact, most of the time we have no idea where people actually are or what they’re doing. We rely on text, chat, Teams, Outlook and other tools to give us a sense of whether someone has 5 minutes to chat. If you want a 3-hour whiteboard session, that involves a high degree of coordination with people’s calendars in Outlook. Even then people always seem to have ‘hard stops’ at times that are really incompatible with thinking and design sessions.


Karma Calling: LockBit Disrupted After Leaking Entrust Files

Given the damage and disruption being caused by LockBit and other ransomware groups, one obvious question is why these gangs aren't being disrupted with greater frequency, says Allan Liska, principal intelligence analyst at Recorded Future. "We all know these sites are MacGyvered together with bailing wire and toothpicks and are rickety as hell. We should do stuff like this to impose cost on them," Liska says. Some members of the information security community prefer stronger measures, of the "Aliens" protagonist Ripley variety. "I always say: go kinetic and solve the problem permanently," says Ian Thornton-Trump, CISO of Cyjax. "Attribution is for the lawyers. I recommend a strike from orbit, it's the only way to be sure," he says. Another explanation for the attack would be one or more governments opting to "impose costs" on the ransomware gang, says Brett Callow, a threat analyst at Emsisoft. As he notes, the imposing-costs phrase is a direct quote from Gen. Paul M. Nakasone, the head of Cyber Command, who last year told The New York Times that the military has been tasked with not just helping law enforcement track ransomware groups, but also to disrupt them.



Quote for the day:

"The manager has a short-range view; the leader has a long-range perspective." -- Warren G. Bennis

Daily Tech Digest - August 22, 2022

Law Firm Cyber Risk: The 5 Ways Cybercriminals Most Likely Will Attack Your Computers — And 7 Things You Can Do

It’s always better to deal with security risks early on while they’re still small rather than later when they turn huge and cause massive woe. Indeed, a Voke Media survey found that 80% of companies hit by a data breach said they could have prevented it had they only hardened their systems by installing updates and security patches in a timely way. That’s something you too need to be doing, but if you don’t have IT staff trained to monitor, maintain and patch your computers, you will find it advantageous to entrust those tasks to a reputable outside service. This will save you time and greatly reduce the potential for installation errors (those that cause data losses, file corruption or even system crashes). ... Backing up safeguards your critical data against human error, illegitimate deletion, programmatic errors, malicious insiders, malware and hackers. Cloud-to-cloud SaaS backup is ideal — especially if it’s fully automated, HIPAA compliant, running nonstop in the background and employing multiple layers of operational and physical security.


The rise of the data lakehouse: A new era of data value

Gartner’s Ronthal sees the evolution of the data lake to the data lakehouse as an inexorable trend. “We are moving in the direction where the data lakehouse becomes a best practice, but everyone is moving at a different speed,” Ronthal says. “In most cases, the lake was not capable of delivering production needs.” Despite the eagerness of data lakehouse vendors to subsume the data warehouse into their offerings, Gartner predicts the warehouse will endure. “Analytics query accelerators are unlikely to replace the data warehouse, but they can make the data lake significantly more valuable by enabling performance that meets requirements for both business and technical staff,” concludes its report on the query accelerator market. ... “We do see the future of warehouses and lakes coming into a lakehouse, where one system is good enough,” Yuhanna says. For organizations with distributed warehouses and lakes, the mesh architecture such as that of Starburst will fill a need, according to Yuhanna, because it enables organizations to implement federated governance across various data locations.


Devs don’t want to do ops

“The intention is not to put the burden on the developer, it is to empower developers with the right information at the right time,” Harness’s Durkin said. “They don’t want to configure everything, but they do want the information from those systems at the right time to allow operations and security and infrastructure teams to work appropriately. Devs shouldn’t care unless something breaks.” Nigel Simpson, ex-director of enterprise technology strategy at the Walt Disney Company, wants to see companies “recognize this problem and to work to get developers out of the business of worrying about how the machinery works—and back to building software, which is what they’re best at.” ... “Developer control over infrastructure isn’t an all-or-nothing proposition,” Gartner analyst Lydia Leong wrote. “Responsibility can be divided across the application lifecycle, so that you can get benefits from ‘you build it, you run it’ without necessarily parachuting your developers into an untamed and unknown wilderness and wishing them luck in surviving because it’s ‘not an infrastructure and operations team problem’ anymore.”


Defense-in-depth: a proven strategy to protect industrial assets

The first step to any effective OT-security program is building alignment between executives, business leaders, IT and operations. Start by bringing key stakeholders together to establish a clear understanding of business line requirements and critical-system interdependencies. You’ll need frequent and clear communication between OT, IT and engineering. ... Implement an IT/OT segmentation strategy. An IT/OT segmentation strategy separates ICS networks from enterprise networks to prevent bad actors from entering enterprise networks to access ICS devices. This segmentation model can integrate with an IT/OT integration demarcation zone (DMZ) for management tools, security tools and jump hosts, and can establish security zones to ensure devices are logically isolated to allow only required communications. ... Use multi-factor authentication. While most ICS devices can’t support the implementation of multi-factor authentication (MFA), this can still be a viable tool. A jump host that requires MFA can help prevent unauthorized access and direct connections from a lower-security network into a higher one.


How IoT and Metaverse Will Complement Each Other?

IoT devices often have a simple interface and interact with real-world devices. But standard IoT devices with screens may employ the Metaverse to offer a 3D digital user experience. As a result, using IoT devices will give users a more immersive experience. The ability to stay present in real and virtual worlds will be available. As a result, companies can hire an IoT app developer to greatly customize the user interface and experience. As said above, the Metaverse will feel more akin to the physical world when IoT is used. More interaction between people and IoT devices and the intricate environment and processes of the Metaverse will be possible. We will be able to make better decisions with less learning and training, thanks to the immersive nature of the Metaverse and the real-world use cases. The combination is also effective for long-term planning: the amount of digital content derived from real-world objects, such as structures, people, cars, clothing, etc., constantly expands in the Metaverse. As a result, businesses aim to replicate our physical world exactly in cyberspace.


Risk Transfer Is The Key To Successful AI

The most significant challenge, as it pertains to AI, that businesses face is inventing new workflows to leverage AI in existing or new business models, allowing them to significantly grow their market share within existing or new areas. New AI tools and technologies become disastrous distractions from business value. Instead, the business should focus on meaningful transfers of risk. The business will be able to add more customers and demand more for their services when they help customers reduce their own risk. The business’s AI solution then needs a clear transfer of risk itself. Without the AI solution, an expert within the business would be manually providing the service, but with the AI, the expert is more able to deliver the service at greater quality and/or greater scale. Another former colleague of mine at General Electric Global Research, Jim Bray, told me a long time ago that a large part of his value to the company was helping reduce risk around complex engineering and science. A significant contribution that AI scientists make for industrial businesses is in assessing risk and the likelihood of project success.


AI Song Contest: The Eurovision spin-off where music is written by machines

A good AI-generated song is the result of the hard work of entire teams of scientists and musicians who often struggle for months before reaching the desired tunes, making up algorithms and feeding ideas to the machine. The Galician team PAMP! - who came second at this year’s contest to Thailand’s song Enter Demons & Gods - took four months to create its track AI-Lalelo, a song which pays tribute to Galician women keeping the language, traditions and culture of the Spanish region alive. They started by getting the AI programme - an autoregressive language model called GPT-3 which uses deep learning to produce human-like text - to learn to speak Galician, a minority language estimated to be spoken by some 2.4 million people in northwestern Spain. “AI tools work in state languages, not in minority languages,” Joel Cava, Coordinator of the PAMP! Team and Creative Manager of CECUBO Group, told Euronews Next. “For the lyrics, we had to develop a corpus in Galician so that the machine (GPT-3) would learn to speak in our mother tongue”.


How Good Is Your Code Review Process?

An effective code review process starts with alignment on its objective. As a team, it’s important to determine which outcomes your review process is optimizing for. Is it catching bugs and defects, improving the maintainability of the codebase or increasing stylistic consistency? Maybe it’s less about the code and more about increasing knowledge sharing throughout the team? Determining priorities helps your team focus on what kind of feedback to leave or look for. Reviews that are intended to familiarize the reviewer with a particular portion of the codebase will look different from reviews that are guiding a new team member toward better overall coding practices. Once you know what an effective code review means for your team, you can start adjusting your code review activities to achieve those goals. The metrics that indicate a healthy code review process differ, just as the goals do, but with that caveat, there are a few trends every team lead should monitor. Regularly reporting Time to First Review, Review Coverage, Review Influence and Review Cycles metrics will allow you to quickly diagnose and address problems with your code review process.
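As a small illustration of the first of those metrics, Time to First Review can be computed directly from pull-request timestamps. The records below are made up:

```python
# Computing "Time to First Review" from hypothetical pull-request records.
from datetime import datetime
from statistics import median

pull_requests = [  # made-up sample data
    {"opened": "2022-08-15T09:00", "first_review": "2022-08-15T13:30"},
    {"opened": "2022-08-16T10:00", "first_review": "2022-08-17T09:00"},
]

def hours_to_first_review(pr):
    opened = datetime.fromisoformat(pr["opened"])
    reviewed = datetime.fromisoformat(pr["first_review"])
    return (reviewed - opened).total_seconds() / 3600

times = [hours_to_first_review(pr) for pr in pull_requests]
print(f"median time to first review: {median(times):.1f}h")  # 13.8h
```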


Security is hard and won’t get much easier

One major reason security is hard is it’s hard to secure a system without understanding the system in its entirety. As open source luminary Simon Willison posits, “Writing secure software requires deep knowledge of how everything works.” Without that fundamental understanding, he continues, developers may follow so-called “best practices” without understanding why they are such, which “is a recipe for accidentally making mistakes that introduce new security holes.” One common rejoinder is that we can automate human error out of development. Simply enforce secure defaults and security issues go away, right? Nope. “I don’t think the tools can save us,” Willison argues. Why? Because “no matter how good the default tooling is, if engineers don’t understand how it keeps them secure they’ll subvert it—without even meaning to or understanding why what they are doing is bad.” Additionally, no matter how good the tool, if it doesn’t fit seamlessly into security-minded processes, it will never be enough.


CIO Kristie Grinnell on creating a culture of transformation

One thing that we do is have people think about it as if this were your own business. Is this the decision that you would make? If you have one dollar, would you spend it on this technology? We need to recognize that we have that role, that power in IT. We should all be thinking that this is our ability to grow the business. Where am I going to put that dollar to get the most bang for my buck? I’m not just over here in IT and have to deliver to my budget. If I can give some of that back to go invest in something else, and it’s going to make us grow, look what value IT just added. Or I might need to invest it in IT because that’s going to give us a new capability that helps us grow in a different way. So really thinking about how I run IT as a business, and about the return on investment of every single dollar we spend, is important.



Quote for the day:

"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erksine

Daily Tech Digest - August 21, 2022

Using AI to Automate, Orchestrate, and Accelerate Fraud Prevention

Traditional approaches to fraud prevention and response no longer measure up. First of all, they’re reactive, rather than proactive, focused on damage that’s already taken place rather than anticipating, and potentially preventing, the threats of the future. The limitations of this approach play out in commercial off-the-shelf tools that organizations can’t easily modify to new developments in the landscape. Even the most cutting-edge AI solutions may be limited in detecting new types of fraud schemes, having only been trained on known categories. Secondly, today’s siloed operations impede progress. Cybersecurity teams and fraud teams, the two groups on the frontlines of the fight, too often work with different tools, workflows, and intelligence sources. These silos extend across the various stages of the fraud-fighting lifecycle: threat hunting, monitoring, analysis, investigation, response, and more. Individual tools address only discrete parts of the process, rather than the full continuum, leaving much to fall within the gaps. When one team notices something suspicious, the full organization might not know about the threat and act upon it until it’s too late.


Fundamentals of AI Ethics

One of the biggest challenges in AI, bias can stem from several sources. The data used for training AI models might reflect real societal inequalities, or the AI developers themselves might have conscious or unconscious feelings about gender, race, age, and more that can wind up in ML algorithms. Discriminatory decisions can ensue, such as when Amazon’s recruiting software penalized applications that included the word “women,” or when a health care risk prediction algorithm exhibited a racial bias that affected 200 million hospital patients. To combat AI bias, AI-powered enterprises are incorporating bias-detecting features into AI programming, investing in bias research, and making efforts to ensure that the training data used for AI and the teams that develop it are diverse. Gartner predicts that by 2023, “all personnel hired for AI development and training work will have to demonstrate expertise in responsible AI.” Continually monitoring, analyzing, and improving ML algorithms using a human-in-the-loop (HITL) approach – where humans and machines work together, rather than separately – can also help reduce AI bias. 


10 nonfunctional requirements to consider in your enterprise architecture

Scalability refers to a system’s ability to perform and operate as the number of users or requests increases. It is achievable with horizontal or vertical scaling of the machine or attaching AutoScalingGroup capabilities. Here are three areas to consider when architecting scalability into your system (a small sketch of the elasticity decision follows the list):
- Traffic pattern: Understand the system’s traffic pattern. It’s not cost-efficient to spawn as many machines as possible, due to underutilization. Here are three sample patterns:
  - Diurnal: Traffic increases in the morning and decreases in the evening for a particular region.
  - Global/regional: Heavy usage of the application in a particular region.
  - Thundering herd: Many users request resources, but only a few machines are available to serve the burst of traffic. This could occur during peak times or in densely populated areas.
- Elasticity: The ability to quickly spawn a few machines to handle the burst of traffic and gracefully shrink when the demand is reduced.
- Latency: The system’s ability to serve a request as quickly as possible.
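A toy version of that elasticity decision, with illustrative numbers rather than tuned recommendations:

```python
# A toy version of the elasticity decision described above: pick an instance
# count from current load, clamped to a floor and ceiling. All numbers are
# illustrative, not tuned recommendations.
def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=2, max_instances=20):
    needed = -(-requests_per_sec // capacity_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))

print(desired_instances(950))  # burst of traffic -> 10 instances
print(desired_instances(50))   # quiet period    -> floor of 2
```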

When we might meet the first intelligent machines

A few weeks later, Yann LeCun, the chief scientist at Meta’s artificial intelligence (AI) Lab and winner of the 2018 Turing Award, released a paper titled “A Path Towards Autonomous Machine Intelligence.” He shares in the paper an architecture that goes beyond consciousness and sentience to propose a pathway to programming an AI with the ability to reason and plan like humans. Researchers call this artificial general intelligence or AGI. I think we will come to regard LeCun’s paper with the same reverence that we reserve today for Alan Turing’s 1936 paper that described the architecture for the modern digital computer. Here’s why. ... LeCun’s first breakthrough is in imagining a way past the limitations of today’s specialized AIs with his concept of a “world model.” This is made possible in part by the invention of a hierarchical architecture for predictive models that learn to represent the world at multiple levels of abstraction and over multiple time scales. With this world model, we can predict possible future states by simulating action sequences. In the paper, he notes, “This may enable reasoning by analogy, by applying the model configured for one situation to another situation.”


Why DevOps Governance is Crucial to Enable Developer Velocity

One key takeaway from all this: consolidation of application descriptors enables efficiencies via modularization and reuse of tested and proven elements. This way the DevOps team can respond quickly to the dev team needs in a way that is scalable and repeatable. Some potential anti-patterns include: Developers are throwing their application environment change needs over the fence via the ticketing system to the DevOps team causing the relationship to worsen. Leaders should implement safeguards to detect this scenario in advance and then consider the appropriate response. An infrastructure control plane, in many cases, can provide the capabilities to discover and subsume the underlying IaC files and detect any code drift between the environments. Automating this process can alleviate much of the friction between developers and DevOps teams. Developers are taking things into their own hands resulting in an increased number of changes in local IaC files and an associated loss of control. Mistakes happen, things stop working, and finger pointing ensues. 
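On the drift-detection point, the core of such a check is a comparison of declared state against observed state. A toy sketch with made-up resource attributes:

```python
# A toy drift check: compare the declared IaC state with what is actually
# running. Both dicts are hypothetical examples.
declared = {"instance_type": "m5.large", "port": 443, "replicas": 3}
actual = {"instance_type": "m5.xlarge", "port": 443, "replicas": 3}

drift = {key: {"declared": declared[key], "actual": actual.get(key)}
         for key in declared if declared[key] != actual.get(key)}
print(drift or "no drift")  # {'instance_type': {...}}
```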


The Role of ML and AI in DevOps Transformation

DevOps is changing fundamentally as a result of AI and ML. Change in security is most notable because it acknowledges the need for complete protection that is intelligent by design (DevSecOps). Many of us believe that shortening the software development life cycle is the next critical step in the process of ensuring the secure delivery of integrated systems via Continuous Integration & Continuous Delivery (CI/CD). DevOps is a business-driven method for delivering software, and AI is the technology that may be integrated into the system for improved functioning; they are mutually dependent. With AI, DevOps teams can test, code, release, and monitor software more effectively. Additionally, AI can enhance automation, swiftly locate and fix problems, and enhance teamwork. AI has the potential to increase DevOps productivity significantly. It can improve performance by facilitating rapid development and operation cycles and providing an engaging user experience for these features. Machine Learning technologies can make data collection from multiple DevOps system components simpler.


Data Lakes Are Dead: Evolving Your Company’s Data Architecture

Changing your data architecture starts with recognizing that the process spans beyond IT – it’s a company-wide shift. Data literacy and culture are fundamental components of launching or changing data architecture. This shift begins with defining your business goals and value chain. What business problem do you want to solve, and how can your data be optimized to accomplish that goal? Different data architecture offers diverse possibilities for conducting analytics, none of which are inherently better than another. Having a company-wide understanding of where you are and where you’re going helps guide what you should be getting out of your data and what architecture would best serve those needs at each level of your organization. Once you’ve identified how to manage your data better to serve your organization, you need to establish overarching data governance. Again, data governance is not a set of procedures for IT, but a company-wide culture. An impactful data culture involves a carefully curated ecosystem of roles, responsibilities, tools, systems, and procedures. 


7 benefits of using design review in your agile architecture practices

The things involved in a design review include:
- The designer: the person who wants to solve a problem.
- The documentation: the document at the center of attention. It contains information regarding all aspects of the problem and the proposed solution.
- The reviewer: the person who will review the documentation.
- The process: the agreed-upon rules and interactions that define the designer's and reviewer's communications. It may stand alone or be part of a bigger process. For example, in a software development life cycle, it could precede development, or in an API specification, it could include evaluating changes.
- The review scope: the area the reviewer tries to cover when reviewing the documentation (technical or not).
... Design review has clear value that far outweighs the overhead it introduces, much like code review does in software releases. Organizations should consider it part of their governance model in conjunction with other tools and practices, including architecture review boards.


Enterprise Architecture Governance – Why It Is Important

The Enterprise Architecture organization helps to develop and enable the adoption of design, review, execution and governance capabilities around EA. EA guidance and governance over the enterprise IT solutions delivery processes focus on realizing a number of solution characteristics. These include:
- Standardization: Development and promotion of enterprise-wide IT standards.
- Consistency: Enable required levels of information, process and applications integration and interoperability.
- Reuse: Strategies and enabling capabilities that allow reuse of IT assets at the design, implementation and portfolio levels. This could include both process/governance and asset repository considerations.
- Quality: Delivering solutions that meet business functional and technical requirements, with a lifecycle management process that ensures solution quality.
- Cost-effectiveness and efficiency: Enabling consistent use of standards, reuse and quality through repeatable decision governance processes, reducing total solutions lifecycle cost and enabling better returns on IT investments.


How Blockchain Checks Financial Frauds within Companies

Blockchains are made to be resistant to data modification by design. A blockchain can effectively function as an open, distributed ledger that can efficiently and permanently record transactions between two parties. Blockchain can also be used to verify transactions that have been reported. Using the technology, auditors could simply confirm the transactions on readily accessible blockchain ledgers rather than requesting bank statements from clients or contacting third parties for confirmation. The blockchain technology achieves this immutability by matching cryptography with blockchain. Each transaction that the blockchain network deems valid is time-stamped, embedded into a ‘block’ of data, and cryptographically secured by a hashing operation that links to and integrates the hash of the previous block. This new transaction then joins the chain as the following chronological update. Meta-data from the hash output of the previous block is always incorporated into the hashing process of a new block. 
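The chaining described here is easy to see in miniature. The sketch below (a simplified illustration, not any production blockchain's actual format) shows how tampering with an earlier block breaks the link recorded in the next one:

```python
# A minimal sketch of hash chaining: each block embeds the previous block's
# hash, so altering any past record breaks every later link.
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    block = {"timestamp": time.time(), "transactions": transactions,
             "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["alice pays bob 10"], prev_hash="0" * 64)
second = make_block(["bob pays carol 4"], prev_hash=genesis["hash"])

# Tampering with the first block changes its hash, which no longer matches
# the prev_hash recorded in the second block.
genesis["transactions"][0] = "alice pays bob 1000"
fields = {k: genesis[k] for k in ("timestamp", "transactions", "prev_hash")}
payload = json.dumps(fields, sort_keys=True).encode()
print(hashlib.sha256(payload).hexdigest() == second["prev_hash"])  # False
```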



Quote for the day:

"Leaders make decisions that create the future they desire." -- Mike Murdock

Daily Tech Digest - August 20, 2022

AI & Synthetic Data's Analysis Of Human Movement

One of the special applications of AI is pose estimation, a computer vision approach that aids in determining the position and orientation of the human body from an image of a person. It can be utilized, for instance, in markerless motion capture, worker position analysis, and avatar animation for virtual reality. To properly analyze posture, it is necessary to take numerous pictures of the human actor and its surrounding environment. The joints of the human actor are then identified in these photos using a trained convolutional neural network. AI-based fitness apps typically take advantage of the camera on the device to record video at up to 720p and 60fps to capture more frames while an exercise is being performed. The issue is that when utilizing a method like pose estimation, computer vision experts require enormous volumes of visual data to train AI for fitness assessments. Data involving humans engaging in many types of exercise and interacting with several items is quite complicated. To prevent bias, the data must also have high variance and be sufficiently broad.


Why Vulnerability May Be a Leader's Greatest Strength

As leaders, we owe it to our teams to admit when we make a mistake, but it takes vulnerability to admit that we can be wrong. For example, imagine someone recommended a change that I turned down but later recognized as the right move. There is value in providing an explanation of what made me go in that direction, but ultimately, I need to take responsibility for being wrong. People respect it when others, especially those in leadership, demonstrate the vulnerability it takes to acknowledge they, too, are only human. Leadership vulnerability drives the courage to innovate and trust among team members, with benefits that ripple into their engagement, satisfaction and retention. Mistakes happen, but a leader who pretends to be perfect and expects perfection ends up with a team too frightened to come clean about their mistakes. They either avoid admitting when they make them or avoid the risk of making them altogether, holding back creativity, innovation and new ideas.


Google Patches Chrome’s Fifth Zero-Day of the Year

“Publicizing details on an actively exploited zero-day vulnerability just as a patch becomes available could have dire consequences, because it takes time to roll out security updates to vulnerable systems and attackers are champing at the bit to exploit these types of flaws,” observed Satnam Narang, senior staff research engineer at cybersecurity firm Tenable, in an email to Threatpost. Holding back info is also sound given that other Linux distributions and browsers, such as Microsoft Edge, also include code based on Google’s Chromium Project. These all could be affected if an exploit for a vulnerability is released, he said. “It is extremely valuable for defenders to have that buffer,” Narang added. While the majority of the fixes in the update are for vulnerabilities rated as high or medium risk, Google did patch a critical bug tracked as CVE-2022-2852, a use-after-free issue in FedCM reported by Sergei Glazunov of Google Project Zero on Aug. 8. FedCM—short for the Federated Credential Management API–provides a use-case-specific abstraction for federated identity flows on the web, according to Google.


CyberArk Channel Chief: Huge Amount Of Momentum Around SaaS

“We have a huge amount of momentum with our partners around SaaS,” Moore said in an interview with CRN, a week after CyberArk announced impressive second-quarter general revenues and subscription revenues tied to its new products and SaaS strategies. CyberArk, with headquarters in Newton, Mass. and Petach Tikva, Israel, is now about halfway through its 36-month-long global channel transformation that includes a new emphasis on SaaS and subscriptions, said Moore, who joined CyberArk two years ago as its senior vice president of global channels. “Our channel partners love SaaS and love subscriptions, for all the reasons we love SaaS and subscriptions,” he said. ... In particular, he said he likes the fact that CyberArk is now providing earlier access to its new technologies and resources, giving his firm more time to convince customers about the pluses of CyberArk’s offerings. “It’s been nothing but positive,” he said of Optiv’s partnership with CyberArk.


Data Science Vs. Machine Learning: What’s The Difference?

Machine learning is a subset of data science that applies algorithms to make predictions about future events from data. Data scientists use machine learning to find patterns in data, make predictions, and improve the accuracy of future predictions. Data science is a broader field that includes techniques like predictive modeling, feature engineering, and data analysis. It involves understanding how data can be used to improve business outcomes. Data scientists use machine learning to analyze and understand data sets, making predictions about the relationships between variables. Some key differences between the two fields include:
- Machine learning is a probabilistic approach that uses algorithms to learn from data; data science is focused on understanding and extracting knowledge from data.
- Machine learning is focused on making automated decisions using data.
- Machine learning is often used to solve problems where there is a lot of historical data, while data science is used more for situations where there is not as much historical data.
- Data scientists often profoundly understand the problem they are trying to solve and use that understanding to develop machine learning models.


How SaaS transforms software development

SaaS applications end the fear of delivering an unknown, showstopper bug to customers, without any way to fix it for weeks or months. The days of delivering a patch to an installed product have gone by the wayside. Instead, if a catastrophic bug does wend its way through the development pipeline and into production, you can know about it as soon as it strikes. You can take immediate action—roll back to a known good state or flip off a feature flag—practically before any of your customers even notice. Often, you can fix the bug and deploy the fix in a matter of minutes instead of months. And it’s not just bugs. You no longer have to hold new features as “inventory,” waiting for the next major release. It used to be that if you built a new feature in the first few weeks after a major release, that feature would have to wait potentially months before being made available to customers. Now, a SaaS application can deliver a new feature immediately to customers whenever the team says it is ready.


Welcome To 2032: A Merged Physical/Digital World

We are starting to evolve beyond classical computing into a new data era called quantum computing. It is envisioned that quantum computing will accelerate us into the future by impacting the landscape of artificial intelligence and data analytics. The quantum computing power and speed will help us solve some of the biggest and most complex challenges we face as humans. ... Science is already making great advances in brain/computer interface. This may include neuromorphic chips and brain mapping. Brain-computer interfaces are formed via emerging assistive devices that have implantable sensors that record electrical signals in the brain and use those signals to drive external devices. Eventually these nano-chips may be implanted into our brains, artificially augmenting human thought and reasoning capabilities, and we may be able to upload intelligent data and cognitive resources to our brains by 2032. ... The areas of health and medicine will witness a profound growth of technological innovation by 2032. Numerous breakthroughs in genomics anti-aging therapies will extend our longevity and quality of life.


Patch Now: 2 Apple Zero-Days Exploited in Wild

Security researchers are urging users of Apple Mac, iPhone, and iPad devices to immediately update to newly released versions of the operating systems for each technology, to mitigate risk from two critical vulnerabilities in them that attackers are actively exploiting. The zero-day flaws allow threat actors to take complete control of affected devices. They impact users of iPhone 6s and later, all models of iPad Pro, iPod touch (7th generation), iPad Ai2 and later, iPad 5th generation and later, and iPad mini 4 and later. Also affected are users with Macs running macOS Monterey, macOS Big Sur, and macOS Catalina. Apple disclosed the vulnerabilities and the updates addressing them on Wednesday. One of the zero-days (CVE-2022-32893) exists in WebKit, Apple's browser engine for Safari and for all iOS and iPadOS Web browsers. Apple described the flaw as tied to an out-of-bounds write issue that attackers could use to remotely take control of vulnerable devices.


A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’

The majority of AI models in production today are “black box” systems that, by the very nature of their architecture, produce outputs using far too many steps of abstraction, deduction, or conflation for a human to parse. In other words, a given AI system might use billions of different parameters to produce an output. In order to understand why it produced that particular outcome instead of a different one, we’d have to review each of those parameters step-by-step so that we could come to the exact same conclusion as the machine. A solution: the EU should adopt a strict policy preventing the deployment of opaque or black box artificial intelligence systems that produce outputs that could affect human outcomes unless a designated human authority can be held fully accountable for unintended negative outcomes. ... There’s currently no political consensus as to who’s responsible when AI goes wrong. If the EU’s airport facial recognition systems, for example, mistakenly identify a passenger and the resulting inquiry causes them financial harm or unnecessary mental anguish, there’s nobody who can be held responsible for the mistake.


Chipping Away at the Monolith: Applying MVPs and MVAs to Legacy Applications

Organizations are sometimes tempted to do extra technical work, to modernize, or reduce their technical debt because, as they may rationalize, "we’re going to be working on that part of the application anyway, so we should clean things up while we are there." While well-intentioned, this is almost always a bad decision that results in unnecessary cost and delay because once started, it’s very hard to decide to stop. This is where the concept of the MVA pays dividends: it gives everyone a way to decide what changes must be made, and which changes should not be made, at least not yet. If a change is necessary to deliver the desired customer outcome for a release, then it’s part of the MVA, otherwise, it’s out. Sometimes, a team may look at the changes needed to an application and decide, considering the state of the code, that a complete rewrite is in order. The MVA concept, applied to legacy applications, helps to temper that by questioning whether the changes are really necessary to produce the incremental improvements in customer outcomes that are desired.



Quote for the day:

"The art of communication is the language of leadership." -- James Humes

Daily Tech Digest - August 19, 2022

As businesses embrace fully-remote work, does company culture suffer?

Companies that still want to move to a fully remote workplace should consider taking specific actions before doing so, according to Frana. Organizations should:

- Find out how your staff feels about remote work. Send out a survey to see which employees would want to work from home. Based on those results, you can determine the level of flexibility your company might want to offer.
- Make sure management is on board. One of the top factors in a remote work policy’s success is how managers feel about it. Explain the benefits of remote work, such as significant savings, the ability to attract and retain top talent from anywhere in the world, and increased productivity.
- Be intentional about company culture. One of the biggest challenges faced by remote teams is maintaining a strong company culture. In addition to thoughtfully evaluating your current workforce and deciphering what an effective remote-friendly business model looks like, it’s imperative that company leaders and managers act with intention and prioritize culture.


Creating A Culture Of Cybersecurity

Businesses need to help their employees learn how to do things differently and train them to think of security as a business priority. Researchers have found that our working memory capacity is between three and five ‘chunks’ of information. This number starts to decline in our 30s, so a safe working figure is probably four chunks of information that the majority of your employees can keep in short-term memory at any point. What does this mean for security? Basically, we need to keep things simple and easy to remember. Factsheets and training days may have their place, but on their own they’re not enough. Consider instead a strategy that uses a combination of continual awareness testing and roleplaying worst-case scenarios, to make security something that’s embedded as a mindset. ... CoEs (centres of excellence) act as sparring partners, allowing businesses to test solutions and assumptions around products, services and solutions. CoPs (communities of practice) take this work to a larger audience, allowing employees to form communities that keep them up to date on the latest threats and remind them of their responsibility for keeping the network safe.


How Not to Waste Money on Cybersecurity

A common way enterprises waste money on IT security is by configuring their security plans and budgets based on the latest cybersecurity trends and following what other organizations are doing. “Each organization's security needs will differ based on their line of business, culture, people, policies, and goals,” says Ahmad Zoua, director of network IT and infrastructure at Guidepost Solutions, a security, investigations, and compliance firm. “What could be an essential security measure to one organization may have little value to another.” Poor planning and coordination can lead to needless duplication and redundancy. “In large organizations, we frequently see many products and platforms that have the same or similar capabilities,” says Doug Saylors, cybersecurity co-leader for technology research and advisory firm ISG. “This is typically the result of a lack of a cohesive cybersecurity strategy across IT functions and a disconnect with the business.” Organizations often layer security products on top of each other year after year.


An Experiment Showed that the Military Must Change Its Cybersecurity Approach

Weis says the Pentagon needs to measure its networks’ suitability for combat the same way it does for soldiers, sailors, tanks, and ships: through the concept of military readiness. Such an approach would mean prioritizing the biggest problems first, with second-tier or complicated ones set on slower paths to fixing. “There's 'ready to fight tonight.' But if you are a carrier strike group and you're deploying in three months, are you on a path to being ready? You manage your readiness on a day-to-day basis and it's a function of a whole bunch of things,” he said. “Do we have the right people? Are they trained? Are they qualified, or deficient? Do we have the equipment?” But Weis had to show that getting to a state of “readiness” in cyberspace is a matter of constant testing and drilling, not filling out compliance forms. He needed a safe space where he could understand readiness without exposing huge problems to adversaries or taking essential naval networks offline. He went to the Naval Postgraduate School, or NPS, in Monterey, California.


Bumpers in the bowling alley: the value of effective data management

According to John Peluso, chief product officer at AvePoint, a layered approach to security is an important way for businesses to achieve this goal. “The most direct thing that we have seen customers find value in – especially in the case of a malware event like ransomware – is the ability to access data,” he says. “The way to achieve this is by having a reliable business continuity strategy. This becomes more difficult when you consider that the data stored on someone else’s architecture – such as server content, cloud services, or anything with a synchronisation capability – is less covered by traditional enterprise data protection strategies. That’s new territory. While many businesses may think that because they have outsourced the architecture, they’ve also outsourced the responsibility, in some cases they haven’t. Businesses are becoming increasingly reliant on cloud services, so they need to be factored into the overall business continuity and resilience strategy.” This reliance on cloud services has, in some ways, been driven by the swift move to hybrid and remote working.


Feds Urge Healthcare Entities to Address Cloud Security

Most major healthcare organizations have become increasingly dependent on cloud-based services, says John Houston, vice president of privacy and information security and associate counsel at the University of Pittsburgh Medical Center, an integrated healthcare delivery organization that includes 40 hospitals and 800 outpatient sites. This reliance is in large part due to many IT vendors moving their services "exclusively to the cloud," he tells Information Security Media Group. "As such, ensuring the security and availability of cloud-based services - and associated information - is and will remain one of UPMC's top priorities. Unfortunately, such assurance can be problematic for a variety of reasons, most notably being able to accurately assess the cloud vendor’s security posture. Further, getting meaningful contractual commitments is difficult - including financial coverage in the event of a breach," Houston says. Benjamin Denkers, chief innovation officer at privacy and security consulting firm CynergisTek, says the biggest cloud threat comes when organizations rely on third parties and simply assume the environment is properly secured.


WebOps: A DevOps for Websites, but the Tools Let It Down

From an IT perspective, how is WebOps usually managed? According to Koenig, it depends on the relationship between the IT and marketing departments. In some cases, he said, the marketing department “earmarks budget to pay for developers who are technically in IT, but are dedicated to Marketing’s technology needs.” But in other cases, he’s seen “really strong central IT organizations” in which IT takes the lead — and in those cases, they tend to make use of their existing DevOps team and practices. In DevOps, CI/CD is a common part of the workflow. I asked if that’s the case with WebOps too, and if so, how CI/CD works in the web context. For static sites, Koenig replied, testing is done during the build (typically after content is updated). “The more challenging case is where people have content management,” he said, “so you have a living piece of software that’s running your live website, and that is connected to a database, it’s got some binary assets, images, PDFs, what have you. So you have people using that in production to post new content [but] you also want to be able to make design changes and add functionality.”
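To make the static-site half of that concrete, here is a minimal, hypothetical sketch of a post-build check a WebOps pipeline might run after content is updated. The public/ output directory and the internal-link rule are assumptions for illustration, not the behavior of any particular WebOps platform:

```python
# Hypothetical sketch of a WebOps-style post-build check for a static site.
# Assumes a generator has already written HTML into ./public; the paths and
# the link rule are illustrative, not tied to any specific tool.
import re
import sys
from pathlib import Path

SITE_ROOT = Path("public")  # assumed output directory of the static build

# Root-relative hrefs without a query string or fragment.
HREF_RE = re.compile(r'href="(/[^"#?]*)"')

def main() -> int:
    broken = []
    for page in SITE_ROOT.rglob("*.html"):
        for link in HREF_RE.findall(page.read_text(encoding="utf-8")):
            target = SITE_ROOT / link.lstrip("/")
            # A link like /about/ should resolve to /about/index.html,
            # and /about to either about.html or about/index.html.
            if not (target.exists()
                    or (target / "index.html").exists()
                    or target.with_suffix(".html").exists()):
                broken.append((page, link))
    for page, link in broken:
        print(f"broken internal link {link} in {page}")
    return 1 if broken else 0  # non-zero exit fails the pipeline stage

if __name__ == "__main__":
    sys.exit(main())
```

Because the whole site is rebuilt on every content change, a check like this can gate deployment the same way unit tests gate an ordinary DevOps release; the harder CMS-backed case Koenig describes has no equivalent single build step to hook into.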


Why Are Robots So Important To Farmers?

Robots have revolutionized agriculture in recent years by increasing crop yields, decreasing labor costs, and simplifying the process of harvesting crops. The widespread use of robots in farming can be attributed to their ability to perform tasks that are difficult or impossible for humans, such as moving around in tight spaces or reaching high up into plants. As a result of their increased efficiency and versatility, robots have become an essential part of modern agriculture. They are used to plant, harvest, package, and transport crops. They can also detect and avoid obstacles while performing tasks, significantly reducing the chances of human injury or equipment failure. In addition, robots are often equipped with sensors that allow them to gather information about crops and environmental conditions to optimize operations. While many plant varieties have some resistance to insect damage or disease, robots may also be used to control the insects or pathogens that often affect crops. Robots are also used in areas where humans cannot or would not wish to work, such as space exploration and deep-sea operations.


Five ways augmented analytics is protecting business revenue

Making sure the right person has the right information, at the right time, can be critical to a business. Suppose, for example, there’s an error in your app that prevents users in a particular country from logging in. Initially it may be just a drop in the ocean in terms of the company’s customer base, but over time it could result in user churn and a loss in revenue. Augmented analytics is able to identify such a problem early on, from a minimal number of failed attempts, and immediately flag it for the person who can fix it. This avoids the lag of messages being routed to the wrong department, where they are often overlooked by someone who misses their significance. Augmented analytics means potential revenue leaks can be plugged fast, and that means losses can be minimised. ... Keeping a customer satisfied is never easy. Human behaviour is hard enough to predict at the best of times. But augmented analytics can transform the way companies find and fix issues that are turning customers off. The technology identifies “hidden” trends, patterns and anomalies and alerts organisations faster than those anomalies would otherwise appear on traditional dashboards.
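As a rough illustration of the kind of baseline comparison involved, the sketch below flags a country whose failed-login count deviates sharply from its own recent history. The data, threshold, and alert routing are invented for this example; real augmented-analytics products learn baselines from historical telemetry rather than hard-coding them:

```python
# Illustrative sketch of baseline-based anomaly flagging for the
# failed-login example above. All numbers here are hypothetical.
from statistics import mean, stdev

# Hourly failed-login counts per country over the past week (assumed data).
baseline = {
    "DE": [3, 5, 4, 6, 2, 4, 5],
    "FR": [2, 1, 3, 2, 2, 1, 2],
}
current = {"DE": 5, "FR": 41}  # latest hour

def flag_anomalies(baseline, current, z_threshold=3.0):
    """Flag countries whose latest count sits far above their own history."""
    alerts = []
    for country, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            # Flat history: any increase at all is unusual.
            anomalous = current[country] > mu
            z = float("inf") if anomalous else 0.0
        else:
            z = (current[country] - mu) / sigma
            anomalous = z > z_threshold
        if anomalous:
            alerts.append((country, current[country], round(z, 1)))
    return alerts

for country, count, z in flag_anomalies(baseline, current):
    # In a real system this alert would be routed straight to the owning team.
    print(f"ALERT: {count} failed logins from {country} (z-score {z})")
```

Run against the sample data, only FR is flagged: its 41 failures are tiny in absolute terms but wildly out of line with its own baseline, which is exactly the “drop in the ocean” case a company-wide dashboard would miss.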


How Google Cloud blocked the largest Layer 7 DDoS attack at 46 million rps

The attack was stopped at the edge of Google’s network, with the malicious requests blocked upstream from the customer’s application. Before the attack started, the customer had already configured Adaptive Protection in their relevant Cloud Armor security policy to learn and establish a baseline model of the normal traffic patterns for their service. As a result, Adaptive Protection was able to detect the DDoS attack early in its life cycle, analyze its incoming traffic, and generate an alert with a recommended protective rule, all before the attack ramped up. The customer acted on the alert by deploying the recommended rule, leveraging Cloud Armor’s recently launched rate-limiting capability to throttle the attack traffic. They chose the ‘throttle’ action over a ‘deny’ action to reduce the chance of impact on legitimate traffic, while severely limiting the attack by dropping most of its volume at Google’s network edge. Before deploying the rule in enforcement mode, they first deployed it in preview mode, which enabled the customer to validate that only the unwelcome traffic would be denied while legitimate users could continue accessing the service.
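The article doesn’t describe Cloud Armor’s internals, but the throttle-versus-deny trade-off is easy to see with a generic token bucket, sketched below with invented limits. A deny rule rejects every request matching the rule; a throttle keeps letting through up to a fixed per-client rate, which is why it is the gentler choice when the rule might also match legitimate users:

```python
# Generic token-bucket throttling sketch illustrating the throttle-vs-deny
# trade-off described above. The per-client limits are invented; this is
# not Cloud Armor's implementation, just the usual shape of the technique.
import time

class TokenBucket:
    """Allow up to `rate` requests per second per client, with short bursts."""

    def __init__(self, rate: float, burst: float) -> None:
        self.rate = rate          # steady-state requests/second
        self.capacity = burst     # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True       # request passes through
        return False          # throttled: this request drops, not the client

buckets: dict[str, TokenBucket] = {}

def handle(client_ip: str) -> str:
    # One bucket per client; a 'deny' rule would instead reject every
    # request from a matched client outright.
    bucket = buckets.setdefault(client_ip, TokenBucket(rate=10.0, burst=20.0))
    return "served" if bucket.allow() else "throttled (HTTP 429)"

if __name__ == "__main__":
    results = [handle("203.0.113.7") for _ in range(30)]
    print(results.count("served"), "served;",
          results.count("throttled (HTTP 429)"), "throttled")
```

Something analogous to preview mode also falls out naturally: log the “throttled” verdict without enforcing it, then confirm that known-good users never appear in the would-be-dropped traffic before switching enforcement on.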



Quote for the day:

"The final test of a leader is that he leaves behind him in other men, the conviction and the will to carry on." -- Walter Lippmann