Daily Tech Digest - August 19, 2022

As businesses embrace fully-remote work, does company culture suffer?

Companies that still want to move to a fully remote workplace should consider taking specific actions before doing so, according to Frana. Organizations should:

- Find out how your staff feels about remote work. Send out a survey to see which employees would want to work from home. Based on those results, you can determine the level of flexibility your company might want to offer.
- Make sure management is on board. One of the top factors in a remote work policy’s success is how managers feel about it. Explain the benefits of remote work, such as significant savings, the ability to attract and retain top talent from anywhere in the world, and increased productivity.
- Be intentional about company culture. One of the biggest challenges faced by remote teams is maintaining a strong company culture. In addition to thoughtfully evaluating your current workforce and deciphering what an effective remote-friendly business model looks like, it’s imperative company leaders and managers act with intention and prioritize culture.


Creating A Culture Of Cybersecurity

Businesses need to help their employees learn how to do things differently and train them to think of security as a business priority. Researchers have found that our working memory capacity is between three and five ‘chunks’ of information. This number starts to decline in our 30s, so a safe working figure is probably four chunks of information that the majority of your employees are able to keep in their short-term memory at any point. What does this mean for security? Basically, we need to keep things simple and easy to remember. Factsheets and training days may have their place, but on their own they’re not enough. Consider instead a strategy that uses a combination of continual awareness testing and roleplaying worst-case scenarios, to make security something that’s embedded as a mindset. ... CoEs (centers of excellence) act as sparring partners, allowing businesses to test solutions and assumptions around products, services and solutions. CoPs (communities of practice) take this work to a larger audience, allowing employees to form communities to keep them up to date on the latest threats and remind them about their responsibility in keeping the network safe.


How Not to Waste Money on Cybersecurity

A common way enterprises waste money on IT security is by configuring their security plans and budgets based on the latest cybersecurity trends and following what other organizations are doing. “Each organization's security needs will differ based on their line of business, culture, people, policies, and goals,” says Ahmad Zoua, director of network IT and infrastructure at Guidepost Solutions, a security, investigations, and compliance firm. “What could be an essential security measure to one organization may have little value to another.” Poor planning and coordination can lead to needless duplication and redundancy. “In large organizations, we frequently see many products and platforms that have the same or similar capabilities,” says Doug Saylors, cybersecurity co-leader for technology research and advisory firm ISG. “This is typically the result of a lack of a cohesive cybersecurity strategy across IT functions and a disconnect with the business.” Organizations often layer security products on top of each other year after year.


An Experiment Showed that the Military Must Change Its Cybersecurity Approach

Weis says the Pentagon needs to measure its networks’ suitability for combat the same way it does for soldiers, sailors, tanks, and ships: through the concept of military readiness. Such an approach would mean prioritizing the biggest problems first, with second-tier or complicated ones set on slower paths to fixing. “There's 'ready to fight tonight.' But if you are a carrier strike group and you're deploying in three months, are you on a path to being ready? You manage your readiness on a day-to-day basis and it's a function of a whole bunch of things,” he said. “Do we have the right people? Are they trained? Are they qualified, or deficient? Do we have the equipment?” But Weis had to show that getting to a state of “readiness” in cyberspace is a matter of constant testing and drilling, not filling out compliance forms. He needed a safe space where he could understand readiness without exposing huge problems to adversaries or taking essential naval networks offline. He went to the Naval Postgraduate School, or NPS, in Monterey, California.


Bumpers in the bowling alley: the value of effective data management

According to John Peluso, chief product officer at AvePoint, a layered approach to security is an important way for businesses to achieve this goal. “The most direct thing that we have seen customers find value in – especially in the case of a malware event like ransomware – is the ability to access data,” he says. “The way to achieve this is by having a reliable business continuity strategy. “This becomes more difficult when you consider the data that is stored on someone else’s architecture – such as server content, cloud services, or anything with a synchronisation capability – is less covered by traditional enterprise data protection strategies. That’s new territory. While many businesses may think that because they have outsourced the architecture, they've also outsourced the responsibility, in some cases they haven’t. Businesses are becoming increasingly reliant on cloud services, so they need to be factored into the overall business continuity and resilience strategy.” This reliance on cloud services has, in some ways, been driven by the swift move to hybrid and remote working.


Feds Urge Healthcare Entities to Address Cloud Security

Most major healthcare organizations have become increasingly dependent on cloud-based services, says John Houston, vice president of privacy and information security and associate counsel of integrated healthcare delivery organizations at the University of Pittsburgh Medical Center, which includes 40 hospitals and 800 outpatient sites. This reliance is in large part due to many IT vendors moving their services "exclusively to the cloud," he tells Information Security Media Group. "As such, ensuring the security and availability of cloud-based services - and associated information - is and will remain one of UPMC's top priorities. "Unfortunately, such assurance can be problematic for a variety of reasons, most notably being able to accurately assess the cloud vendor’s security posture. Further, getting meaningful contractual commitments is difficult - including financial coverage in the event of a breach," Houston says. Benjamin Denkers, chief innovation officer at privacy and security consulting firm CynergisTek, says he also thinks the biggest threat involving cloud is when organizations are reliant on the third parties and assume the environment is properly secured.


WebOps: A DevOps for Websites, but the Tools Let It Down

From an IT perspective, how is WebOps usually managed? According to Koenig, it depends on what the relationship is between the IT and marketing departments. In some cases, he said, the marketing department “earmarks budget to pay for developers who are technically in IT, but are dedicated to Marketing’s technology needs.” But in other cases, he’s seen “really strong central IT organizations” in which IT takes the lead — and in those cases, they tend to make use of their existing DevOps team and practices. In DevOps, CI/CD is a common part of the workflow. I asked if that’s the case with WebOps too, and if so how does CI/CD work in the web context? For static sites, Koenig replied, testing is done during the build (typically after content is updated). “The more challenging case is where people have content management,” he said, “so you have a living piece of software that’s running your live website, and that is connected to a database, it’s got some binary assets, images, PDFs, what have you. So you have people using that in production to post new content [but] you also want to be able to make design changes and add functionality.”
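Build-time testing for a static site, as described above, can be as simple as validating the generated output. Here is a minimal sketch in Python; the file layout and the single check (no broken internal links) are invented for illustration, not a description of any particular WebOps platform:

```python
import pathlib
import re

def check_internal_links(site_dir: str) -> list[str]:
    """Scan a built static site for internal links that point at missing pages."""
    root = pathlib.Path(site_dir)
    # Collect every generated HTML page, keyed by its site-relative path.
    pages = {p.relative_to(root).as_posix() for p in root.rglob("*.html")}
    broken = []
    for page in root.rglob("*.html"):
        html = page.read_text(encoding="utf-8")
        # Naive href extraction; a real check would parse the HTML properly.
        for href in re.findall(r'href="([^"#]+\.html)"', html):
            if href.lstrip("/") not in pages:
                broken.append(f"{page.name} -> {href}")
    return broken
```

A check like this runs after every content update, which matches the "testing during the build" workflow Koenig describes for static sites.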


Why Are Robots So Important To Farmers?

Robots have revolutionized agriculture in recent years by increasing crop yields, decreasing labor costs, and simplifying the process of harvesting crops. The widespread use of robots in farming can be attributed to their ability to perform tasks that are either difficult or impossible for humans to do, such as moving around in tight spaces or reaching high up into plants. As a result of their increased efficiency and versatility, robots have become an essential part of modern agriculture. They are used to plant, harvest, package, and transport crops. They can also detect and avoid obstacles while performing tasks, significantly reducing the chances of human injury or equipment failure. In addition, robots are often equipped with sensors that allow them to gather information about crops and environmental conditions to optimize operations. Many plants are also susceptible to insect damage or disease, so robots may be used to control the insects or pathogens that often affect crops. Robots are also used in areas where humans cannot or would not wish to work, such as space exploration and deep-sea operations.


Five ways augmented analytics is protecting business revenue

Making sure the right person has the right information, at the right time, can be critical to a business. Suppose, for example, there’s an error in your app that prevents users in a particular country from logging in. Initially it may be just a drop in the ocean in terms of the company’s customer base, but over time it could result in user churn and a loss in revenue. Augmented analytics is able to identify such a problem early on from a minimal number of failed attempts and immediately flag it for the person who can fix it. This avoids lag time and stops alerts going to the wrong department, where they are often overlooked by someone who misses their significance. Augmented analytics means potential revenue leaks can be plugged fast, and that means losses can be minimised. ... Keeping a customer satisfied is never easy. Human behaviour is hard enough to predict at the best of times. But augmented analytics can transform the way companies find and fix issues that are turning customers off. The technology identifies “hidden” trends, patterns and anomalies and alerts organisations faster than those anomalies would otherwise appear on traditional dashboards.
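The failed-login scenario can be sketched as a simple per-country baseline monitor. The window size and sigma threshold below are illustrative defaults, not values any particular augmented-analytics product uses:

```python
import statistics
from collections import deque

class LoginFailureMonitor:
    """Flags a country when its failed-login count jumps above a learned baseline."""

    def __init__(self, window: int = 24, sigmas: float = 3.0):
        self.window = window      # how many past intervals form the baseline
        self.sigmas = sigmas      # how far above the mean counts as anomalous
        self.history: dict[str, deque] = {}

    def record(self, country: str, failures: int) -> bool:
        """Record one interval's failure count; return True if it looks anomalous."""
        hist = self.history.setdefault(country, deque(maxlen=self.window))
        anomalous = False
        if len(hist) >= 5:  # need a minimal baseline before flagging anything
            mean = statistics.mean(hist)
            stdev = statistics.pstdev(hist) or 1.0  # flat history: avoid zero spread
            anomalous = failures > mean + self.sigmas * stdev
        hist.append(failures)
        return anomalous
```

A spike for one country then triggers an alert routed straight to the owning team, which is exactly the "right person, right time" routing the passage describes.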


How Google Cloud blocked the largest Layer 7 DDoS attack at 46 million rps

The attack was stopped at the edge of Google’s network, with the malicious requests blocked upstream from the customer’s application. Before the attack started, the customer had already configured Adaptive Protection in their relevant Cloud Armor security policy to learn and establish a baseline model of the normal traffic patterns for their service. As a result, Adaptive Protection was able to detect the DDoS attack early in its life cycle, analyze its incoming traffic, and generate an alert with a recommended protective rule–all before the attack ramped up. The customer acted on the alert by deploying the recommended rule leveraging Cloud Armor’s recently launched rate limiting capability to throttle the attack traffic. They chose the ‘throttle’ action over a ‘deny’ action in order to reduce chance of impact on legitimate traffic while severely limiting the attack capability by dropping most of the attack volume at Google’s network edge. Before deploying the rule in enforcement mode, it was first deployed in preview mode, which enabled the customer to validate that only the unwelcome traffic would be denied while legitimate users could continue accessing the service. 
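The "throttle" versus "deny" trade-off can be illustrated with a generic token-bucket limiter: excess requests are shed while legitimate clients can still get through once traffic falls back under the configured rate. This is a sketch of the general technique, not Cloud Armor's actual implementation:

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter.

    Throttling with a bucket sheds only the traffic above the configured
    rate, rather than denying a client outright as a blanket 'deny' would.
    """

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # tokens replenished per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: drop this request, not the client
```

Previewing such a rule before enforcing it, as the customer did here, amounts to running `allow()` in log-only mode and checking which real users would have been affected.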



Quote for the day:

"The final test of a leader is that he leaves behind him in other men, the conviction and the will to carry on." -- Walter Lippmann

Daily Tech Digest - August 18, 2022

How Productivity And Surveillance Technology Can Create A Crisis For Businesses

“The use of productivity and surveillance technology can create crisis situations for companies and organizations due to the fact that they are not always clear on what they are getting into,” according to Jeff Colt, founder and CEO of Aquarium Fish City, an aquarium and aquatic website. “Companies oftentimes do not fully understand the ramifications of using these tools. For example, if a company decides to implement surveillance technology in the workplace, it needs to make sure that it is not violating any laws. Additionally, it needs to make sure that it is not infringing on any employee rights or privacy rights,” he said in a statement. “The use of productivity and surveillance technology can also create crisis situations because some people may not be comfortable with being monitored by their employers. This could lead some employees to feel like they are being treated unfairly as well as causing them to quit their jobs altogether,” Colt noted. ... “The use of these technologies can have the opposite intended effect when not managed properly,” said Natalia Morozova, managing partner at Cohen, Tucker & Ades, an immigration law firm.


The benefits of regenerative architecture and unlocking the data potential in buildings

Regenerative architecture is “architecture that focuses on conservation and performance through a focused reduction on the environmental impacts of a building.” It can allow buildings to generate their own electricity and provides structures to sell excess energy back to the grid, creating a comprehensive, self-sustaining prosumer architecture. By producing their own energy through solar and wind turbines, these buildings significantly lower their carbon emissions and have more resilience in the face of extreme weather events. Some can even reverse environmental damage. But to fully leverage these opportunities, building owners and facility managers need smarter control of their energy. The right data, insights, and control help to make fast decisions and act on them. This is possible through the digitalization of buildings. Buildings are responsible for 40% of the world’s CO2 emissions, second only to manufacturing. Yet, 30% of energy in buildings is wasted, often heating, cooling, and lighting empty spaces.


Quantum Physics Could Finally Explain Consciousness, Scientists Say

The existence of free will as an element of consciousness also seems to be a deeply non-deterministic concept. Recall that in mathematics, computer science, and physics, deterministic functions or systems involve no randomness in the future state of the system; in other words, a deterministic function will always yield the same results if you give it the same inputs. Meanwhile, a nondeterministic function or system will give you different results every time, even if you provide the same input values. “I think that’s why cognitive sciences are looking toward quantum mechanics. In quantum mechanics, there is room for chance,” Danielsson tells Popular Mechanics. “Consciousness is a phenomenon associated with free will and free will makes use of the freedom that quantum mechanics supposedly provides.” However, Jeffrey Barrett, chancellor’s professor of logic and philosophy of science at the University of California, Irvine, thinks the connection is somewhat arbitrary from the cognitive science side.
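The deterministic/nondeterministic distinction the passage leans on is easy to state in code. A deterministic function maps the same input to the same output every time; a nondeterministic one may not:

```python
import random

def deterministic(x: int) -> int:
    """Same input always yields the same output."""
    return x * x

def nondeterministic(x: int) -> int:
    """Same input can yield different outputs on different calls."""
    return x * x + random.randint(0, 9)
```

Classical physics resembles the first function; the passage's claim is that quantum mechanics introduces something like the second, and that this "room for chance" is what draws cognitive scientists to it.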


Eclypsium calls out Microsoft over bootloader security woes

The malicious shell activity involves visual elements that could potentially be detected by users on workstation monitors during the boot process; however, the vulnerabilities are especially dangerous for servers and industrial control systems that lack displays. The third vulnerability, CVE-2022-34302, is even harder to detect, as exploitation would remain virtually invisible to system owners. The researchers discovered that the New Horizon DataSys bootloader contains a small file that acts as a built-in bypass for Secure Boot; the 73 KB file disables the Secure Boot check without turning the protocol off completely, and it also has the ability to execute additional bypasses for security handlers. The discovery of the Horizon DataSys built-in Secure Boot bypass was definitely a "holy crap moment," Shkatov told SearchSecurity. The researchers said admin access is required for full exploitation, but they demonstrated an exploit during the presentation that used a phishing email and a malicious Word document that elevated their privileges to admin. 


Things You Should Know About Artificial Intelligence and Design

Nearly anyone who lives in the modern world produces data, often on the order of terabytes per day. We text our friends, stream videos, use fitness apps, ask Siri about the weather while we look out the window, walk by CCTV cameras, and the list goes on. Most of these data are unstructured, i.e. not organized in any clear order. Machine learning provides a way for computers to glean meaning from this lack of structure. As Armstrong puts it, “even now as you read, computers sift and categorize your data trails—both unstructured and structured — plunging deeper into who you are and what makes you tick.” How does it do this? The short answer is algorithms, statistical analysis, and prediction. Not sure what any of those words mean? ... As a researcher dedicated to demystifying emerging technology for landscape architects, I believe it is vital we get designers of all demographics and digital abilities to a shared understanding of what AI is so we can all better facilitate its continued permeation into practice. Big Data. Big Design. does this in spades.


The effect of digital transformation on the CIO job

The CIO has always been a super-important role. I'd liken it [in the past] to the role of a flight engineer. You can't take off if the flight engineer is not on board; he or she serves a super-important purpose – it's mission critical, it's a lights-on operation. It's about delivering a really important capability: to keep the engine, the plane running, in this case, the enterprise running. We're seeing a big change happen because with digital transformation -- and using technology to deliver a new business value proposition -- the world is now starting to center around digital. And the role of the CIO is changing because he or she's now more and more becoming the pilot or the co-pilot, helping colleagues and their stakeholders and the rest of the executive committee to really reimagine the business value proposition on the back of new technology. And so that's one big change that we're going through because the [CIO] seat at the table, the role of the individual, is completely changing. I think another thing that's happening is that tech is no longer the long pole in the tent. And what I mean by that is when you do digital transformation, it isn't just the tech, it's the data. 


How Can Clinical Trials Benefit From Natural language processing (NLP)?

NLP can help identify patterns in participant responses that may indicate whether a treatment is effective. This information can improve the accuracy of trial results and make better decisions about which treatments to pursue. In addition, NLP can help researchers understand why certain participants respond well or poorly to a treatment. This knowledge can help develop more effective treatments in the future. Several different NLP tools can be used in clinical trials. The most commonly used tools include machine learning algorithms, text mining techniques, and Word2Vec models. Each has advantages and disadvantages. Therefore, it’s crucial to pick the appropriate tool for the job. Fortunately, many software platforms provide pre-built libraries that make it easy to use NLP in your research projects. Natural language processing (NLP) has significantly impacted clinical trials by helping researchers identify patterns in participant feedback. This has allowed for more informed decisions about modifying or improving treatments.
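As a toy illustration of the pattern-identification idea, responses can be tagged for effectiveness and adverse-event signals. The signal terms below are invented; a real pipeline would use trained models rather than keyword lists:

```python
import re
from collections import Counter

# Hypothetical signal vocabularies; a real study would derive these from data.
EFFECT_TERMS = {"improved", "better", "relief", "recovered"}
ADVERSE_TERMS = {"nausea", "dizzy", "headache", "worse"}

def summarize_responses(responses: list[str]) -> Counter:
    """Count effectiveness vs. adverse-event signals across free-text feedback."""
    counts = Counter()
    for text in responses:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & EFFECT_TERMS:
            counts["positive_signal"] += 1
        if words & ADVERSE_TERMS:
            counts["adverse_signal"] += 1
    return counts
```

Even this crude tally hints at how aggregated free-text feedback can surface whether a treatment arm is trending well or poorly.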


New neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today's computing platforms

The key to NeuRRAM's energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power-hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy-efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of the RRAM weights. The neuron's connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure.
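The single-cycle parallelism described here can be mimicked in software with a toy crossbar multiply-accumulate. This models only the arithmetic, not the device physics, and is purely illustrative:

```python
def crossbar_mvm(weights: list[list[float]], voltages: list[float]) -> list[float]:
    """Toy model of an RRAM crossbar matrix-vector multiply.

    In hardware, every row is driven and every column sensed at once,
    so the whole double loop below collapses into one computing cycle.
    """
    n_rows, n_cols = len(weights), len(weights[0])
    outputs = [0.0] * n_cols
    for i in range(n_rows):        # all rows driven simultaneously in hardware
        for j in range(n_cols):    # all columns sensed simultaneously in hardware
            outputs[j] += weights[i][j] * voltages[i]
    return outputs
```

The point of voltage-mode sensing is that this entire accumulation happens in the analog domain before a single energy-efficient conversion back to digital.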


Monoliths to Microservices: 4 Modernization Best Practices

Surveys have shown that the days of manually analyzing a monolith using sticky notes on whiteboards take too long, cost too much and rarely end in success. Which architect or developer in your team has the time and ability to stop what they’re doing to review millions of lines of code and tens of thousands of classes by hand? Large monolithic applications need an automated, data-driven way to identify potential service boundaries. ... When everything was in the monolith, your visibility was somewhat limited. If you’re able to expose the suggested service boundaries, you can begin to make decisions and test design concepts — for example, identifying overlapping functionality in multiple services. ... We all know that naming things is hard. When dealing with monolithic services, we can really only use the class names to figure out what is going on. With this information alone, it’s difficult to accurately identify which classes and functionality may belong to a particular domain. ... What qualities suggest that functionality previously contained in a monolith deserves to be a microservice?
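An automated, data-driven first pass at service boundaries might start as crudely as grouping classes by package prefix. This is a deliberately naive sketch; real modernization tools also weigh call graphs and data-access patterns:

```python
from collections import defaultdict

def suggest_service_boundaries(class_names: list[str]) -> dict[str, list[str]]:
    """Group fully qualified class names by package prefix.

    Even this rough grouping exposes candidate domains faster than
    reviewing tens of thousands of classes with sticky notes.
    """
    groups = defaultdict(list)
    for name in class_names:
        prefix = name.rsplit(".", 1)[0] if "." in name else "(root)"
        groups[prefix].append(name)
    return dict(groups)
```

Exposing the suggested groupings then lets architects test design concepts, such as spotting overlapping functionality that ended up in two candidate services.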


PC store told it can't claim full cyber-crime insurance after social-engineering attack

According to Chief District Judge Patrick Schiltz, who handed down the order, this case treads somewhat new legal ground. In the opinion, Schiltz noted that both SJ's lawsuit and Travelers' dismissal motion only cite three other cases, all from different jurisdictions, that "analyze the concept of direct causation in the context of computer or social-engineering fraud." All of those cases had a major difference in common, the court pointed out – none of them involved insurance policies that cover both computer and social engineering fraud, or make clear that the two types of fraud are different, mutually exclusive categories. This case, therefore, is less of a litmus test for the future of legal disagreements around social engineering insurance payouts, and more an examination of a close reading of contracts. "[Travelers'] Policy clearly anticipates – and clearly addresses – precisely the situation that gave rise to SJ Computers' loss, and the Policy bends over backwards to make clear that this situation involves social-engineering fraud, not computer fraud," Schiltz said.



Quote for the day:

"People only bring up your past when they are intimidated by your present." -- Joubert Botha

Daily Tech Digest - August 17, 2022

The second age of foundational technologies

We’re being overwhelmed by a tsunami of new foundational technology. Artificial intelligence (AI) is allowing computer systems to learn and solve problems that humans can’t. CRISPR is letting scientists edit genes and program DNA. Blockchain has brought new ways to think about money, contracts, and identity. The list of paradigm-shifting innovations goes on, and includes 3D printing, virtual reality, the metaverse, and civilian space flight. ... “When a technological revolution irrupts in the scene, it does not just add some dynamic new industries to the previous production structure. It provides the means for modernizing all the existing industries and activities.” Let that sink in for a minute. We are in the midst of “modernizing all the existing industries and activities.” That means enormous, wrenching, society-overhauling change. We see it all around us. Part of society is racing ahead with cryptocurrencies, social media, AI, and on and on—while others fight to hold on to a way of life they’ve always known. So, divides widen in society and politics, and between rich and poor, and rising and falling nations.


The IT Leader’s Guide to Helping Developers Avoid Burnout

In this new era of work, it's imperative for team members – from the CEO down – to have the ability to "read the virtual room" and have an understanding of what developers are thinking and feeling based on the tone and content of online interactions and conversations. Whether it’s Slack, Zoom, Teams or any other collaboration tool, it’s not the same as communicating face-to-face with someone who’s literally sitting at the same table. It’s possible to teach leaders the skills necessary to manage effectively in this environment, but we’re also seeing a rise of new and emerging leaders that are thriving because they place a priority on empathy and personal connections, even when most of the communication that takes place with their team members is digital. Paying attention to online social cues can help leaders determine if and when team members are stretching themselves too thin. Make no mistake, modern communication tools have helped make work more productive and efficient. But the best leaders are those who are able to analyze behavior on these tools so they can offer team members support when it’s needed most.


Edge computing: 4 key security issues for CIOs to prioritize

“Edge computing can create more complexity, and this can make securing the entire system more difficult,” says Jeremy Linden, the senior director of product management at Asimily. “Still, there is nothing inherently less secure about edge computing.” The big edge security risks should sound familiar – compromised credentials, malware and other malicious code, DDoS attacks, and so forth. What’s different is that these risks are now occurring farther and farther away from your primary or central environment(s) – the traditional network perimeter of yore is no longer your only concern. “Edge computing poses unique security challenges since you’re moving away from walled garden central cloud environments and everything is now accessible over the Internet,” says Priya Rajagopal, director, product management, Couchbase. The good news: Many of the same or similar tactics and tools organizations use to secure their cloud (especially hybrid cloud and/or multi-cloud) and on-premises environments still apply – they just need to be applied out at the edge.


Beyond Data Democracy: Why a Shift to Data Stewardship is Essential for Leadership Success

“Data democracy” has been heralded as the answer to this rapid cycle of innovation—but it is not enough. These initiatives have noble intentions: Sharing data and information about how users interact with products widely should, in theory, help groups across the business—from marketing to IT—operate from the same source of truth to stimulate better insights and better results faster. In reality, however, data democracy fails to yield those conclusive answers and shared goals. Too much raw data is difficult and time-consuming for teams to interpret, especially as the flow of digital signals has surged, and lacks the context needed to draw conclusions about the best path forward. Instead, the data is so oppressively overwhelming to manage that departments either give up or derive inaccurate conclusions—neither of which helps drive sound decisions and productive partnerships. Rather, these conditions create a new source of frustration and inefficiency for many engineering teams: the entire organization has access to information ripe for misinterpretation, even as expectations for results grow more urgent.


Microsoft Disrupts Russian Group's Multiyear Cyber-Espionage Campaign

Microsoft said its researchers have observed Seaborgium using stolen credentials to directly log in to victims' email accounts and steal their emails and attachments. In a few instances, the threat actor has also been observed configuring victim email accounts to forward emails to attacker-controlled addresses. "There have been several cases where Seaborgium has been observed using their impersonation accounts to facilitate dialogue with specific people of interest and, as a result, were included in conversations, sometimes unwittingly, involving multiple parties," ... As far as the disruption goes, the computing giant has now disabled accounts that Seaborgium actors have been using for victim reconnaissance, phishing, and other malicious activities. This includes multiple LinkedIn accounts. It has also developed detections for phishing domains associated with Seaborgium. F-Secure, which refers to the threat actor as the Callisto Group, has been tracking its activities since 2015. In a 2017 report, the security vendor had described Callisto Group as a sophisticated actor targeting governments, journalists, and think tanks in the EU and parts of eastern Europe.


What is challenging successful DevSecOps adoption?

Although adoption is low for now, the study also confirms potential growth in the industry with 62% of respondents saying their organization is actively evaluating use cases or has plans to implement DevSecOps. “As organizations adopt modern software development processes leveraging cloud platforms, they are looking to incorporate security processes and controls into developer workflows,” said Melinda Marks, senior analyst at ESG. “This research shows DevSecOps can be a game changer for companies, and there is no doubt we will see growing market traction over the next few years.” ... Companies believe that establishing a culture of collaboration and encouraging developers to leverage security best practices are nearly equal in importance to adopting DevSecOps tools. While it is common to anticipate cultural transformation to be a roadblock prior to adoption, those practicing DevSecOps report that technical limitations, such as data capture and analysis, are actually greater barriers to success.


Lawsuit Against FTC Intensifies Location Data Privacy Battle

The dispute between Kochava and the FTC also comes in the wake of an executive order by President Biden in July, following the Supreme Court Roe v. Wade ruling. Among other actions, the executive order directed the FTC to consider options "to address deceptive or fraudulent practices, including online, and protect access to accurate information" (see: Biden Order Seeks to Protect Reproductive Data Privacy). Kochava claims the government is making the company a scapegoat. "The FTC's hope was to get a small, bootstrapped company to agree to a settlement - with the effect of setting precedent across the adtech industry and using that precedent to usurp the established process of Congress creating law. Kochava disagreed with this scheme and asked the federal court in Idaho to intervene," Mariam says. Also, among other allegations, Kochava's lawsuit claims the FTC’s proposed enforcement action would overstep its legal authority related to enforcing the FTC Act. The FTC declined ISMG's request for comment on the Kochava dispute.


IT Job Market Still Strong, But Economic Headwinds Complicate Picture

David Wagner, senior research director with Computer Economics, says despite the economic headwinds, about 60% of companies surveyed in the company's latest report said they were planning to increase headcount -- the largest percentage since the 2008 recession. “We continue to think this is a sign of more IT headcount growth in the next few years,” he explains. “It comes with a small caveat, of course, which is that the economic headwinds have gotten a little stronger over the last couple of months than they were at the beginning of the year.” However, from Wagner's perspective, IT has become so strategically important to every business that particularly when it comes to IT staffing companies are going to be as positive about their staffing and their IT spending as they can be. “It's not a surprise when Google and Microsoft both announced their most recent hiring freezes right around the time they were giving their quarterly earnings,” he says. “I think what's going to happen is there's going to be a pause as companies look around and figure out how bad things are going to be.”


When it comes to changing culture, think small

To change the way people work together, Martin argues, leaders must model the behaviors they want to see. “Literally the only way that I’ve seen culture change in the 42 years since I graduated from business school is when a leader sets out to demonstrate a different kind of behavior and makes that behavior work. Other people take their cues from that behavior, and, slowly but surely, the culture changes,” he says. “Kremlin-watching does not happen only in Moscow—it’s an incredibly powerful force. People watch the leadership and do what the leadership does.” A notable aspect of this approach is that it does not require a major initiative or investment. Instead, the culture change depends on micro-interventions: small adjustments to the structure, dynamics, or framing of interpersonal interactions, applied consistently over time. Martin helped orchestrate this kind of change while working with A.G. Lafley when he was the CEO of Procter & Gamble. Lafley wanted to revamp the consumer giant’s overly bureaucratic strategic process. 


How To Do Data Governance Better

Business initiatives are built on data, and your data governance program needs to support those objectives. For example, your business goal might be better data discovery, to make business reporting easier to consume and find. You need to understand—and embrace—how data is consumed and used. This drives the core metrics and dashboards for validating data and checking data quality. When you scope out a core purpose or goal you’re trying to achieve in the first few months or quarters, then you won’t get overwhelmed. A data domain represents the logical grouping of data, either by item or area of interest, within an organization. With these high-level categories in place, organizations can assign accountability or responsibility for their data. Decentralized consumption models make it possible for different teams to define categories differently based on domain-level knowledge. They may use different names or metrics for the same data. A shared vocabulary across all departments standardizes how data is used and accessed, increasing alignment across departments and making use and accountability easier for everyone.



Quote for the day:

"You don't lead by pointing and telling people some place to go. You lead by going to that place and making a case." -- Ken Kesey

Daily Tech Digest - August 16, 2022

What are virtual routers and how can they lead to virtual data centers?

So what can you do with virtual router technology? The number one application, according to enterprises, is virtual networking, especially SD-WAN. All virtual-network technologies build an overlay network that has its own on- and off-ramp elements, which are really access routers. While many vendors offer this technology as appliances, most will also provide virtual routers for hosting on servers. That may make sense in the data center, where there are already racks of servers installed. Using virtual routers means that if one fails because its server went down, another can be easily spun up to take its place. Virtual routers are also essential in many cloud applications. Public cloud providers are understandably unenthusiastic about your sending your techs to install routers in their data centers, but you may need a virtual router there if you want to use virtual networking and SD-WAN optimally. For this type of cloud virtual routing, make sure your virtual router is compatible with the virtual network or SD-WAN technology you’re using.


Overcoming the roadblocks to passwordless authentication

There are a variety of roadblocks associated with moving to passwordless authentication. Foremost is that people hate change. End users push back when you ask them to abandon the familiar password-based login page and go through the rigamarole of registering a factor or device required for typical passwordless flows. Further, app owners will often resist changing their applications to support passwordless flows. Overcoming these obstacles can be hard and expensive. It can also be exacerbated by the need to support more than one vendor’s passwordless solution. For example, most passwordless solutions pose app-level integration challenges that require implementing SDKs to support even simple flows. What happens if you want to support more than one solution? Or use your passwordless solution as both a primary identity and authentication provider and a step-up authentication provider? Or you want to layer in behavioral analytics? There is a way to address these human and technical challenges standing in the way of passwordless adoption using orchestration. Although common in virtualized computing stacks, orchestration is a new concept in identity architectures. 


Obsolescence management for IT leaders

Obsolescence will always be a by-product of continuous technological advances. The best way to improve cyber security and reduce downtime risks is to prepare effectively and take proactive steps to manage obsolescence. With a proactive obsolescence management plan in place, such as a cloud-first approach, businesses can track the lifespan of products. This ensures that IT and operational technology are always protected, improving productivity and reducing costs. To plan for the future, mid-size businesses should carry out an assessment of current infrastructure to understand the components of the IT and operational technology landscape and how these systems interact. Vendors will often publish end-of-life dates for hardware and software at least twelve months in advance. IT managers should look at how much they already spend on maintenance and whether downtime has occurred before. Understanding the risks can also help businesses make more informed decisions about their equipment. Businesses should consider how the failure of a hardware or software component will impact operations, costs and reputation, and whether the equipment is compatible with the rest of the system.
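A minimal sketch of the lifecycle tracking described above, with hypothetical products and end-of-life dates (real dates would come from vendor EOL announcements):

```python
from datetime import date

# Hypothetical inventory: product -> vendor-published end-of-life date.
inventory = {
    "edge-router-fw": date(2023, 3, 31),
    "scada-gateway": date(2022, 11, 30),
    "hmi-workstation-os": date(2025, 6, 30),
}

def eol_report(inventory, today, warn_days=365):
    """Flag anything past end-of-life or inside the warning window
    (vendors often publish dates at least twelve months ahead)."""
    report = {}
    for product, eol in sorted(inventory.items()):
        remaining = (eol - today).days
        if remaining < 0:
            report[product] = "past end-of-life"
        elif remaining <= warn_days:
            report[product] = f"{remaining} days left - plan replacement"
        else:
            report[product] = "ok"
    return report

print(eol_report(inventory, today=date(2022, 8, 19)))
```

Running the report on a schedule turns obsolescence from a surprise into a budgeting line item, which is the proactive posture the article argues for.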


The pitfalls of poor data management – and how to avoid them

One of the challenges is how differences in patient profile can drastically change the costs associated with the same procedure. For example, a healthy patient with no comorbidities can likely receive a colonoscopy at an outpatient center. However, a patient with a medical condition such as hemophilia would need that same colonoscopy performed in the more costly hospital setting because of the complications that could potentially arise. This variability makes providing accurate estimates complicated. One way to potentially address this issue is to provide best-case and worst-case estimates. Getting to the point where these estimates can be made in real time, so that a procedure can safely continue when a complication arises without the concern of being fined or not properly reimbursed, is key. Also, while the regulations are well-intended, the reality is it is probably unnecessary to have the specified level of price transparency for every encounter. We need to focus on the most problematic events – those medical episodes that bankrupt people because they had no idea what their out-of-pocket costs would be.


Icelandic datacentres may lead the way to green IT

One of the main application areas where Icelandic datacentres make a lot of sense is in artificial intelligence (AI). With the advancement of AI methodologies such as unsupervised machine learning, for many applications, AI training and inference now needs to occur in the same location – they need to be colocated to facilitate iteration between the two processes. Foundational AI models run for weeks or months to retrain, so running a full training data set is very energy intensive. Businesses that depend on AI models do training continuously to get different versions of the models. For example, they might train for a specific customer who has a data set they want trained against. ... A second type of application where Icelandic datacentres make sense is in financial services. Although trading applications require very low latency and are usually placed close to exchanges in edge or metro locations, they depend on the output of larger, more compute-intensive applications. These applications use thousands of computers 24 hours a day to run Monte Carlo simulations and Markov Chain analysis to make predictions about market trends. 
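For a flavor of those compute-heavy workloads, here is a minimal Monte Carlo sketch (hypothetical parameters, standard library only) that simulates price paths under geometric Brownian motion and estimates the chance a price ends the year higher; production runs do this across thousands of instruments and machines:

```python
import math
import random

def simulate_terminal_prices(s0, mu, sigma, days, n_paths, seed=42):
    """Monte Carlo under geometric Brownian motion: walk many
    independent price paths and collect the terminal prices."""
    rng = random.Random(seed)
    dt = 1 / 252  # one trading day as a fraction of a year
    terminal = []
    for _ in range(n_paths):
        s = s0
        for _ in range(days):
            z = rng.gauss(0, 1)
            s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        terminal.append(s)
    return terminal

prices = simulate_terminal_prices(s0=100, mu=0.05, sigma=0.2, days=252, n_paths=2000)
p_up = sum(p > 100 for p in prices) / len(prices)
print(f"P(price up after one year) ~ {p_up:.2f}")
```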


Automotive hacking – the cyber risk auto insurers must consider

Cyber exposures are a relatively new frontier for auto insurance. Traditional risk considerations have revolved around liability or theft, but those have evolved amid the increasingly connected landscape for vehicles. “We must evaluate the types of losses happening and what’s causing those losses. Are they related to malfunctions in a vehicle? Are they related to hacking? It’s a challenge for insurers even to determine the ultimate cause of a loss,” said Perfetto. “If there was an accident, and it wasn’t the driver’s fault per se but more of a vehicle malfunction, that may not be easily attributed. If there was a hacking incident, that might not be easy to discover.” ... “We have seen data that supports reduction in accident frequency related to certain technology added to a vehicle. But we have also seen the cost of replacing some more advanced technologies increase. Something as simple as a rear end or a minor dent in your bumper that used to be an easy and relatively inexpensive item to fix has become much more costly,” Perfetto noted.


Are debt financings the new venture round for fintech startups?

You have to plan ahead for venture debt. Put it in place relatively soon after an equity financing. That way there is no adverse selection for the lenders; everyone (founders, VCs and lenders) around the table is happy at that time. If you try to put something in place with less than six months of cash, you will not be able to get debt. If you put it in place after an equity round, you can draw it down way into the future — that’s called a forward commitment/drawdown. That gives the startup a lot of optionality. It’s super important to understand all the terms. Often, founders don’t realize there are things like funding MACs, investor abandonment clauses, etc. These terms can be used by the lender to block the startup from either drawing down the money or creating a default after the money has been drawn. Either way, the company is in trouble and can’t count on the capital. So you really need to know your lender, have your VCs know your lender and pay attention to your terms. This is why we created the Sample Venture Debt Term Sheet, to explain all the terms.


The cybersecurity skills gap is ‘not just about addressing headcount’

From a security perspective, I’m hoping an increase in connected systems will lead to less human-error-related cyberattacks. This will largely revolve around increasing API accessibility and integration. Not only do better integrations allow for employees to do better, more efficient work, it also enables a more secure infrastructure throughout your entire organisation. For example, when APIs are accessible throughout the application ecosystem, this allows for systems to be configured through code, helping us introduce streamlined changes to configuration rather than having to go into specific applications. From a security perspective, this enables us to do advanced things like segregation of duty and activity monitoring at scale. These benefits are a large part of why we prioritise connectivity and API accessibility at Templafy, both in our own tech stack and our platform. We know it not only benefits our own team, but also our customers.


IT leadership: Why adaptability matters

The rise of technology has incentivized industries to adapt in recent years. Still, that push is becoming a pull as realities like The Great Resignation and remote work push organizations to change how they interact with and relate to their customers and employees. The return on investment of developing adaptability in organizations comes from talent attraction and retention, increased innovation, improved employee engagement – and potentially, organizational survival. In the past, leaders have been able to draw from models such as William Bridges’ Transitions to understand adaptability. But while these approaches may help us to understand how a person adapts and what behaviors leaders should expect as people move through change, few have explored the why. And without that knowledge, it can be challenging for leaders to create supportive, psychologically healthy workplaces that support people as they adapt. Because adapt they must. The key to unlocking the potential of emotional intelligence is first to understand the construct and then identify the areas for development. The same goes for AQ. 


Developer Experience vs. User Experience

Retaining developers requires more than first impressions. Just as good UX needs to be evaluated, refined, and tested over time, good DX is an investment in the long term. You won’t know how well you’ve succeeded without using analytics to evaluate your DX and test changes. Monitoring your API helps you identify users who have not been able to successfully make API calls, find patterns of success and failure for developers, and see how different users are engaging with your product over time. While tracking UX metrics is relatively straightforward for products focused on end-users, DX metrics differ in important ways. You need to develop a good strategy for API analytics so that you track relevant business value metrics while avoiding vanity metrics. ... You need to understand DX when you build products for developers so that you can attract developer users, inspire their confidence and creativity, and support their increasingly complex integrations over time. Building good UX and DX can be challenging, but with the right analytics stack, you can monitor your API and use metrics to craft the perfect API developer experience.
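A minimal sketch of the kind of per-developer success metric described, using hypothetical log records (in practice these would come from an API gateway or analytics pipeline):

```python
from collections import defaultdict

# Hypothetical API call log: (developer_id, endpoint, http_status).
calls = [
    ("dev-1", "/v1/users", 200), ("dev-1", "/v1/users", 200),
    ("dev-1", "/v1/orders", 500), ("dev-2", "/v1/users", 401),
    ("dev-2", "/v1/users", 401), ("dev-2", "/v1/users", 401),
]

def success_rates(calls):
    """Success rate per developer: a business-value metric,
    unlike vanity metrics such as raw call volume."""
    totals, oks = defaultdict(int), defaultdict(int)
    for dev, _endpoint, status in calls:
        totals[dev] += 1
        oks[dev] += status < 400
    return {dev: oks[dev] / totals[dev] for dev in totals}

rates = success_rates(calls)
# A developer who never succeeds is likely stuck (here, on auth) and
# is exactly the user the article says monitoring should surface.
at_risk = [dev for dev, r in rates.items() if r == 0.0]
print(rates, at_risk)
```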



Quote for the day:

"Taking charge of your own learning is a part of taking charge of your life, which is the sine qua non in becoming an integrated person." -- Warren G. Bennis

Daily Tech Digest - August 15, 2022

How critical infrastructure operators can secure OT data

OT data is foundational to critical areas of operations – a breach to OT systems can risk core business process operations and expose critical data. There is still some maturity required among organisations in prioritising backup and data protection as part of their organisation’s security posture and planned response to a cyber attack. Based on research we did in April 2022 of over 2,000 IT decision-makers and SecOps professionals across the UK, US and Australia, only 54% of IT decision-makers said backup and data protection was a top priority and a crucial capability, while only 38% of SecOps respondents said the same. Many organisations focus on “protect controls” to reduce the likelihood of a breach, but they also need to look at security controls that limit the impact of a breach. This means ensuring your recovery capabilities can meet aggressive recovery time and point objectives, so that you can resume business operations while minimising the impact of a ransomware attack.
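Those recovery point objectives can be checked continuously rather than discovered during an incident. A minimal sketch, with hypothetical systems and timestamps, that flags any system whose last good backup is older than the RPO:

```python
from datetime import datetime, timedelta

def rpo_breaches(last_backup_times, rpo, now):
    """Flag systems whose most recent good backup is older than the
    recovery point objective, i.e. more data loss than is tolerable."""
    return {name: now - t for name, t in last_backup_times.items()
            if now - t > rpo}

now = datetime(2022, 8, 19, 12, 0)
backups = {
    "historian-db": datetime(2022, 8, 19, 9, 0),   # 3 hours ago: fine
    "plc-configs": datetime(2022, 8, 17, 12, 0),   # 48 hours ago: breach
}
print(rpo_breaches(backups, rpo=timedelta(hours=24), now=now))
```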


Uber Open-Sourced Its Highly Scalable and Reliable Shuffle as a Service for Apache Spark

By default, Spark shuffles data on local machines, which creates challenges at very large scale (about 10,000 nodes at Uber's scale). At that scale of operation, major reliability and scalability problems arise. One main challenge in using Spark at Uber scale is system reliability. Machines generate terabytes of shuffle data every day, which wears out SSDs faster, since they are not designed or optimized for such high-IO workloads. SSDs are generally designed to last about three years, but under heavy Spark shuffling they wear out in about six months. Shuffle operations also fail frequently, which further decreases system reliability. The other challenge in this area is scalability: applications can produce more data than fits on a single machine, causing full-disk exceptions. ... To resolve these issues, engineers at Uber architected and designed the Remote Shuffle Service (RSS), as shown in the following diagrams. It solves the reliability and scalability problems of the common Spark shuffle operation.
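The shuffle mechanics can be sketched with a toy model: map tasks hash-partition their (key, value) output so that each reduce task fetches exactly one bucket. This is an illustration of the general Spark shuffle pattern, not Uber's RSS code; the RSS idea is that these buckets are written to dedicated remote servers instead of each worker's local SSD.

```python
import zlib
from collections import defaultdict

def partition_for(key, num_reducers):
    # Stable hash so every map task routes a given key to the same reducer.
    return zlib.crc32(key.encode()) % num_reducers

def shuffle_write(map_output, num_reducers):
    """Each map task splits its output into one bucket per reduce task."""
    buckets = [[] for _ in range(num_reducers)]
    for key, value in map_output:
        buckets[partition_for(key, num_reducers)].append((key, value))
    return buckets

def reduce_task(bucket):
    """Sum values per key within one fetched partition."""
    totals = defaultdict(int)
    for key, value in bucket:
        totals[key] += value
    return dict(totals)

map_output = [("trips", 1), ("eats", 2), ("trips", 3), ("freight", 4)]
buckets = shuffle_write(map_output, num_reducers=2)
results = [reduce_task(b) for b in buckets]
print(results)
```

Because every occurrence of a key lands in the same bucket, each reducer can aggregate independently; the cost is that all of that bucketed data must be written out and fetched, which is exactly the disk and network load the article describes.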


SMS-Based Multi-Factor Authentication: What Could Go Wrong? Plenty

“We call it smishmash because it’s a mashup of techniques,” explains Olofsson. “SMS for two-factor authentication [2FA] is broken. This is not news; it’s been broken since the inception. It was never intended for this use. We’ve been spoofing text messages since as long as we’ve been hacking. It’s just that now we’re seeing weaponization.” Text messages have a higher implicit trust than email scams, and hence a higher success rate, he notes. Olofsson reviewed several newsworthy breaches involving smishing and 2FA, including a major theft of NFTs from OpenSea. “We see a huge increase in the number of smishing attacks,” he says. “How many of you have got an unsolicited text in the last week? Your phone numbers are increasingly being leaked.” "What we have done [is combine] a search of the clear-net and darknet to create a huge database," says Byström. "Doing this research, we got so much spam,” adds Olofsson. "Even ‘do you want to buy the Black Hat attendee list?’ We got the price down below $100."


Sloppy Use of Machine Learning Is Causing a ‘Reproducibility Crisis’ in Science

Kapoor and Narayanan warn that AI’s impact on scientific research has been less than stellar in many instances. When the pair surveyed areas of science where machine learning was applied, they found that other researchers had identified errors in 329 studies that relied on machine learning, across a range of fields. Kapoor says that many researchers are rushing to use machine learning without a comprehensive understanding of its techniques and their limitations. Dabbling with the technology has become much easier, in part because the tech industry has rushed to offer AI tools and tutorials designed to lure newcomers, often with the goal of promoting cloud platforms and services. “The idea that you can take a four-hour online course and then use machine learning in your scientific research has become so overblown,” Kapoor says. “People have not stopped to think about where things can potentially go wrong.” Excitement around AI’s potential has prompted some scientists to bet heavily on its use in research. Tonio Buonassisi, a professor at MIT who researches novel solar cells, uses AI extensively to explore novel materials. 
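A tiny example of the kind of error such surveys report, here preprocessing leakage: normalizing with statistics computed over all the data lets information from the test set leak into training, inflating reported results. The numbers are invented for illustration.

```python
def mean_std(xs):
    """Population mean and standard deviation."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

train = [1.0, 2.0, 3.0, 4.0]
test = [100.0, 101.0]   # distribution shift the model should have to face

# Wrong: normalization statistics include the test set (leakage).
m_all, s_all = mean_std(train + test)
# Right: statistics from the training split only.
m_tr, s_tr = mean_std(train)

leaky_test = [(x - m_all) / s_all for x in test]
clean_test = [(x - m_tr) / s_tr for x in test]
# The leaky pipeline quietly absorbs the shift; the clean one exposes it.
print(m_all, m_tr)
```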


Why edge is eating the world

The edge is a distributed system. And when dealing with data in a distributed system, the laws of the CAP theorem apply. The idea is that you will need to make tradeoffs if you want your data to be strongly consistent. In other words, once new data is written, you never see older data again. Such strong consistency in a global setup is only possible if the different parts of the distributed system are joined in consensus on what just happened, at least once. That means that if you have a globally distributed database, it will still need at least one message sent to all other data centers around the world, which introduces inevitable latency. Even FaunaDB, a brilliant new SQL database, can’t get around this fact. Honestly, there’s no such thing as a free lunch: if you want strong consistency, you’ll need to accept that it includes a certain latency overhead. Now you might ask, “But do we always need strong consistency?” The answer is: it depends. There are many applications for which strong consistency is not necessary to function. One of them is, for example, this petite online shop you might have heard of: Amazon.
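The latency floor is easy to quantify: a strongly consistent write needs at least one round trip to remote replicas, and light in fiber covers only about 200 km per millisecond. The distances below are rough great-circle figures; real fiber routes are longer, so real latencies are higher still.

```python
C_FIBER_KM_PER_MS = 200  # light in optical fiber travels ~200 km per ms

def min_round_trip_ms(distance_km):
    """Physical lower bound on one consensus round trip."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

# Approximate great-circle distances between hypothetical replica sites.
replicas = {"Frankfurt -> N. Virginia": 6_500, "Frankfurt -> Sydney": 16_500}
for route, km in replicas.items():
    print(f"{route}: >= {min_round_trip_ms(km):.0f} ms per consistent write")
```

No database design, however clever, gets under this bound; it can only hide it with weaker consistency or regional writes.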


How To Protect Yourself With A More Secure Kind Of Multi-Factor Authentication

According to the Cybersecurity and Infrastructure Security Agency, “Multi-factor authentication is a layered approach to securing data and applications where a system requires a user to present a combination of two or more credentials to verify a user’s identity for login.” When we log into an online account, we’re often aiming to thwart an attacker or hacker using extra layers of verification — or locks. ... First, let’s talk about the marketing of MFA. If your MFA provider touts itself as unhackable or 99% unhackable, they are spouting multi-factor B.S. and you should find another provider. All MFA is hackable. The goal is to have a less hackable, more phishing resistant, more resilient MFA. Registering a phone number leaves the MFA vulnerable to SIM-swapping. If your MFA does not have a good backup mechanism, then that MFA option is vulnerable to loss. ... Multi-factor authentication is more securely accomplished with an authenticator app, smart card or hardware key, like a Yubikey. So if you have an app-based or hardware MFA, you’re good, right? Well, no. 
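The "authenticator app" option mentioned above generates time-based one-time passwords. RFC 6238 TOTP is small enough to sketch with the standard library; this minimal version is shown with the RFC's published test secret, not something to deploy as-is.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second
    counter, dynamically truncated to a short numeric code."""
    t = time.time() if for_time is None else for_time
    counter = int(t // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s.
print(totp(b"12345678901234567890", for_time=59))  # "287082"
```

Because the code is derived from a shared secret and the clock rather than delivered over the phone network, there is no SMS to intercept and no SIM to swap, which is the property the article is recommending.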


Met Police ramps up facial recognition despite ongoing concerns

Russell acknowledges that there are exceptional circumstances in which LFR could be reasonably deployed – for instance, under the threat of an imminent terrorist attack – but says the technology is ripe for abuse, especially in the context of poor governance combining with concerns over the MPS’s internal culture raised by the policing inspectorate, which made the “unprecedented” decision to place the force on “special measures” in June 2022 over a litany of systemic failings. “While there are many police officers who have public service rippled through them, we have also seen over these last months and years of revelations about what’s been going on in the Met, that there are officers who are racist, who have been behaving in ways that are completely inappropriate, with images [and] WhatsApp messages being shared that are racist, misogynist, sexist and homophobic,” she said, adding that the prevalence of such officers continuing to operate unidentified adds to the risks of the technology being abused when it is deployed.


Many ZTNA, MFA Tools Offer Little Protection Against Cookie Session Hijacking Attacks

The researchers recently examined technologies from Okta, Slack, Monday, GitHub, and dozens of other companies to see what protection they offered against attackers using stolen session cookies to take over accounts, impersonate legitimate users, and move laterally in compromised environments. ... Okta described such attacks as an issue for which it was not directly responsible. "As a web application, Okta relies on the security of the browser and operating system environment to protect against endpoint attacks such as malicious browser plugins or cookie stealing," Mesh quoted Okta as saying. Most of the other vendors that Mesh contacted about the issue similarly distanced themselves from any responsibility for cookie theft, reuse, and session-hijacking attacks, says Netanel Azoulay, co-founder and CEO of Mesh Security. "We believe that this issue is the complete responsibility of the vendors on our list — including IdP and ZTNA solutions," Azoulay insists. 
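One mitigation in this space is binding the session to client context, so a stolen cookie fails when replayed from a different device. A minimal sketch, with a hypothetical fingerprint format and key, illustrative only; real deployments would use rotated server keys and richer device-posture signals:

```python
import hashlib
import hmac

SERVER_KEY = b"rotate-me-in-production"  # hypothetical server-side secret

def bind_session(session_id: str, client_fingerprint: str) -> str:
    """Issue a cookie value tied to coarse client context (e.g. a TLS
    fingerprint), so replay from another environment fails validation."""
    tag = hmac.new(SERVER_KEY,
                   f"{session_id}|{client_fingerprint}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def validate(cookie: str, client_fingerprint: str) -> bool:
    session_id, _, _tag = cookie.partition(".")
    expected = bind_session(session_id, client_fingerprint)
    return hmac.compare_digest(cookie, expected)

cookie = bind_session("sess-123", "ja3:abc,os:macos")
print(validate(cookie, "ja3:abc,os:macos"))    # same device: accepted
print(validate(cookie, "ja3:zzz,os:windows"))  # stolen and replayed: rejected
```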


Edge computing: 4 pillars for CIOs and IT leaders

By definition, edge computing sort of takes the notion of a centralized IT network environment and shatters it into hundreds or even thousands (or more) of smaller environments. Picture the classic image of a room full of servers, but now every server on every rack sits in its own room – or in many cases no room at all, but on an oil rig or manufacturing floor or cell tower. Almost regardless of your edge use cases, it’s going to entail moving lots of the stuff that has long been the domain of IT – infrastructure/compute, devices, applications, data – away from your IT environment, however that’s currently defined. Properly managing all of that stuff requires some forethought. “You’re probably going to have a lot of devices out on the edge and there probably isn’t much in the way of local IT staff there,” says Gordon Haff, technology evangelist, Red Hat. “So automation and management are essential for tasks like mass configuration, taking actions in response to events, and centralized application updates.”


CIOs Turn to the Cloud as Tech Budgets Come Under Scrutiny

Although investment in cloud tech is booming, CIOs should also be keeping a critical eye on managing cloud costs, which can quickly spiral out of control. To ensure that cloud costs are properly controlled, it is important for CIOs to have tools that enable them to tightly monitor and act on unused resources -- there are no cost benefits if these idle resources remain on the cloud balance sheet. JupiterOne CISO Sounil Yu says the engineering team should shut down these resources soon after they become idle and rebuild the resources through automation when they are needed again. “CIOs should enforce this routine because in addition to reducing costs, it improves the overall resiliency of the organization to unexpected failures since it forces engineers to practice rebuilding regularly,” he says. Dennis Monner, chief commercial officer at Aryaka, agrees cloud investment is going up, and points out there are two parts of this. “First, CIOs need to understand their true cloud costs versus bringing it back in-house, which also introduces risk and expenses,” he said. “This needs to be a true apples-to-apples comparison.”
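Yu's routine can be reduced to a simple policy: decide from recent utilization samples which instances are idle and safe to stop. Thresholds and fleet data here are hypothetical; production code would pull the samples from the cloud provider's monitoring API and stop instances through its SDK.

```python
def find_idle(instances, cpu_threshold=5.0, min_samples=3):
    """Return instances whose last few CPU samples are all below the
    threshold; anything with too little data is left running."""
    idle = []
    for name, samples in instances.items():
        recent = samples[-min_samples:]
        if len(recent) == min_samples and max(recent) < cpu_threshold:
            idle.append(name)
    return idle

# Hypothetical fleet: instance name -> recent CPU utilization (%).
fleet = {
    "build-runner-7": [1.2, 0.8, 0.5, 0.9],   # idle: stop, rebuild on demand
    "checkout-api-2": [41.0, 55.3, 38.9, 47.2],
    "staging-db-1": [2.0, 3.1],               # too few samples: leave it
}
print(find_idle(fleet))
```

Pairing this with automated rebuilds gives the side benefit Yu mentions: teams get regular practice recreating infrastructure, which improves resilience to unexpected failures.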



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - August 14, 2022

Identity crisis: Artificial intelligence and the flawed logic of ‘mind uploading’

We can think of the copy as a digital clone or twin, but it would not be you. It would be a mental copy of you, including all of your memories up to the moment your brain was scanned. But from that time on, the copy would generate its own memories inside whatever simulated world it was installed in. It might interact with other simulated people, learning new things and having new experiences. Or maybe it would interact with the physical world through robotic interfaces. At the same time, the biological you would be generating new memories and skills and knowledge. In other words, your biological mind and your digital copy would immediately begin to diverge. They would be identical for one instant and then grow apart. Your skills and abilities would diverge. Your knowledge and understanding would diverge. Your personality and objectives would diverge. After a few years, there would be significant differences. And yet, both versions would “feel like the real you.” This is a critical point – the copy would have the same feelings of individuality that you have. 


It’s Time to Normalize Cyberattack Data

The hope is that as an open standard, it will be adopted and used with existing security standards and processes. Then, as developers and users incorporate OCSF into their products and processes, security data normalization will become simpler and less burdensome. This, in turn, will enable security teams to do better at analyzing attack data, identifying threats, and defending their organizations from cyberattacks. Ultimately, John Graham-Cumming, Cloudflare’s CTO, said in a statement, “Every business deserves a simple, straightforward way to analyze and understand the security landscape — and that starts with their data. By participating in the OCSF, we hope to help the entire security industry focus on doing the work that matters instead of wasting countless hours and resources on formatting data.” I hope this is true. I hate wasting time. And time is one thing we never have enough of when we’re dealing with a security problem. If OCSF can succeed in its aims, it will be a major step forward in dealing with large-scale security problems.
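The formatting busywork a common schema eliminates looks like this in miniature: per-vendor field maps translating each log shape into one shared vocabulary. The field names here are invented for illustration and are not the actual OCSF schema.

```python
# Hypothetical raw field names from two vendors, mapped to one
# shared schema so events can be compared and queried uniformly.
VENDOR_FIELD_MAPS = {
    "vendor_a": {"src": "src_ip", "act": "action", "ts": "time"},
    "vendor_b": {"sourceAddress": "src_ip", "eventAction": "action",
                 "eventTime": "time"},
}

def normalize(vendor, event):
    """Rename a vendor event's fields into the shared schema."""
    mapping = VENDOR_FIELD_MAPS[vendor]
    return {common: event[raw] for raw, common in mapping.items() if raw in event}

a = normalize("vendor_a", {"src": "10.0.0.5", "act": "deny",
                           "ts": 1660900000})
b = normalize("vendor_b", {"sourceAddress": "10.0.0.5", "eventAction": "deny",
                           "eventTime": 1660900000})
print(a == b)  # the two vendors' events now compare directly
```

With a standard like OCSF, tools emit the shared shape natively and this per-vendor glue code, multiplied across dozens of products, disappears.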


3 Expert-Backed Strategies for Boosting Your Entrepreneurial Energy

Entrepreneurs are a special breed of overthinkers. We're constantly making decisions, so we have to think fast on our feet. But we also must take the time to weigh our options out properly. And so we think up all possible scenarios: the good, the bad and the ugly. This used to be one of my biggest hurdles when starting. What if this client falls through? What if users aren't satisfied with our product? What if we can't attract enough attention and be sustainable? What will I do? My mind was my biggest enemy. Consequently, after a long night of tossing and turning, I'd wake up unmotivated to start the day. Here's the thing I've learned since: energy thrives on confidence. And confidence only comes when you believe in your abilities. As co-authors Linda Bloom, L.C.S.W., and Charlie Bloom, M.S.W., write in Psychology Today, "Self-trust is not trusting yourself to know all the answers, nor is it believing that you will always do the right things. It's having the conviction that you will be kind and respectful to yourself regardless of the outcome of your efforts."


4 Flaws, Other Weaknesses Undermine Cisco ASA Firewalls

"If you have access to the virtual machine, you have full access inside the network, but more importantly, you can sniff all the traffic going through, including decrypted VPN traffic," Baines says. "So, it is a really great place for an attacker to chill out and pivot, but probably just sniff for credentials or monitor the traffic flowing into the network." Baines discovered the issue when he was investigating the Cisco ASDM to get "a level set on how the GUI (graphical user interface) works" and pull apart the protocol, he says. A component installed on administrators' systems, known as the ASDM launcher, could be used by attackers to deliver malicious code in Java class files or through the ASDM Web portal. As a result, attackers could create a malicious ASDM package to compromise the administrator's system through installers, malicious Web pages, and malicious Java components. The ASDM vulnerabilities discovered by Rapid7 include a known vulnerability (CVE-2021-1585) that allows an unauthenticated remote code execution (RCE) attack, which Cisco claimed was patched in a recent update, but Baines discovered it remained.


A Shift in Computer Vision Is Coming

Is computer vision about to reinvent itself, again? Ryad Benosman, professor of ophthalmology at the University of Pittsburgh and an adjunct professor at the CMU Robotics Institute, believes that it is. As one of the founding fathers of event-based vision technologies, Benosman expects that neuromorphic vision — computer vision based on event-based cameras — will be the next direction computer vision will take. “Computer vision has been reinvented many, many times,” Benosman said. “I’ve seen it reinvented twice at least, from scratch, from zero.” Benosman cited the shift in the 1990s from image processing with a bit of photogrammetry to a geometry-based approach and then to today’s rapid advance toward machine learning. Despite those changes, modern computer-vision technologies are still predominantly based on image sensors — cameras that produce an image similar to what the human eye sees. According to Benosman, until the image-sensing paradigm is no longer useful, it holds back innovation in alternative technologies. The development of high-performance processors, such as GPUs, delays the need to look for alternative solutions and has thus prolonged this effect.


What’s the Go programming language really good for?

Go has been compared to scripting languages like Python in its ability to satisfy many common programming needs. Some of this functionality is built into the language itself, such as “goroutines” for concurrency and threadlike behavior, while additional capabilities are available in Go standard library packages, like Go’s http package. Like Python, Go provides automatic memory management capabilities including garbage collection. Unlike scripting languages such as Python, Go code compiles to a fast-running native binary. And unlike C or C++, Go compiles extremely fast—fast enough to make working with Go feel more like working with a scripting language than a compiled language. Further, the Go build system is less complex than those of other compiled languages. It takes few steps and little bookkeeping to build and run a Go project. ... Go binaries run more slowly than their C counterparts, but the difference in speed is negligible for most applications. Go performance is as good as C for the vast majority of work, and generally much faster than other languages known for speed of development.


Ex-CIA security boss predicts coming crackdown on spyware

Protecting individuals' privacy is something all of us — including elected officials — should be very concerned about, Mestrovich said. "I would expect, going forward, there will be either executive orders or legislation passed to ensure that the civil liberties and the rights that we all expect to data privacy and privacy of our own activities are kept sacrosanct," he added. As a CISO himself, ransomware is top of mind. "Ransomware is a huge threat to just our economic viability," Mestrovich told us, citing a Cybersecurity Ventures forecast that global cybercrime costs will grow by 15 percent per year over the next five years, reaching $10.5 trillion annually by 2025. "Clearly, the cyber criminals have monetized the theft of data or depriving an organization use of its data," Mestrovich said. "Until we can do something to prevent the economic gain that they have from the theft of data or the denial of an organization's access to its data, this is only going to increase."


Urgent security warning issued as hackers shift ransomware attacks to small businesses

The Director of the NCSC, Richard Browne, said that in the past these groups typically focussed on larger organisations. However, they have now shifted focus to smaller entities. “We have been dealing with the threat of ransomware for some time; however, we have seen a noticeable change in the tactics of criminal ransomware groups, whereby rather than largely focussing on Governments, critical infrastructure and big business, they are increasingly targeting smaller businesses. “This is a trend that has been observed globally, and Ireland is no exception, with several businesses becoming victims of these groups in the past number of weeks,” he said. Browne said the letter sent to IBEC by the NCSC and GNCCB outlines guidance for small companies on how they can deal with such attacks. “Whilst we appreciate that many business owners are understandably nervous of the threat ransomware poses, there are some straightforward security measures that can be put in place to ensure that an organisation’s data and systems remain secure,” he added.


Computer Vision and Deep Learning for Agriculture

AI applications can analyze weather and soil conditions, water usage, and disease risk to help farmers reduce the risk of crop failures by providing valuable insights, such as the right time to sow seeds and the right crop/seed choices. Detecting plant diseases, weeds, and pests early can reduce the use of chemicals like herbicides and pesticides and bring cost savings. Many companies have started using robots that can eliminate 80% of the volume of the substances generally sprayed on crops and bring down expenditure on herbicides by 90%. Further, the use of AI in harvesting, picking, and vacuum apparatus can quickly identify the location of the harvestable produce and help determine which fruits are ready to pick; strawberry harvesting is a classic example. ... With satellite imagery and weather data, AI applications can analyze market trends, such as which crops are in demand and which are more profitable. This helps farmers increase their revenue by guiding them about future price patterns, demand levels, the type of crop to sow for maximum benefit, pesticide usage, etc.


Rethinking Web Application Firewalls

Vulnerabilities are now so numerous, and cloud native applications have such large attack surfaces, that there is no way to mitigate them using traditional means, Tiperneni explained. “It’s no longer sufficient to throw out a report that tells you about all the vulnerabilities in your system,” Tiperneni said. “Because that report is not actionable. People operating the services are discovering that the amount of time and effort it takes to remediate all these vulnerabilities is incredible, right? So they’re looking for some level of prioritization in terms of where to start.” And the onus is on the user to mitigate the problem, Tiperneni said. Those customers have to think about the blast radius of the vulnerability and its context in the system. The second part is how to manage the attack surface. In this world of cloud native applications, customers are discovering very quickly that trying to protect every single thing, when everything has access to everything else, is an almost impossible task, Tiperneni said.



Quote for the day:

"The Leadership Seduction of storytelling invites self-pity, exaggerates one's importance, and encourages inaction." -- Catherine Robinson-Walker