Daily Tech Digest - August 18, 2022

How Productivity And Surveillance Technology Can Create A Crisis For Businesses

“The use of productivity and surveillance technology can create crisis situations for companies and organizations due to the fact that they are not always clear on what they are getting into,” according to Jeff Colt, founder and CEO of Aquarium Fish City, an aquarium and aquatic website. “Companies oftentimes do not fully understand the ramifications of using these tools. For example, if a company decides to implement surveillance technology in the workplace, it needs to make sure that it is not violating any laws. Additionally, it needs to make sure that it is not infringing on any employee rights or privacy rights,” he said in a statement. “The use of productivity and surveillance technology can also create crisis situations because some people may not be comfortable with being monitored by their employers. This could lead some employees to feel like they are being treated unfairly as well as causing them to quit their jobs altogether,” Colt noted. ... “The use of these technologies can have the opposite intended effect when not managed properly,” said Natalia Morozova, managing partner at Cohen, Tucker & Ades, an immigration law firm.


The benefits of regenerative architecture and unlocking the data potential in buildings

Regenerative architecture is “architecture that focuses on conservation and performance through a focused reduction on the environmental impacts of a building.” It can allow buildings to generate their own electricity and enables structures to sell excess energy back to the grid, creating a comprehensive, self-sustaining prosumer architecture. By producing their own energy through solar panels and wind turbines, these buildings significantly lower their carbon emissions and have more resilience in the face of extreme weather events. Some can even reverse environmental damage. But to fully leverage these opportunities, building owners and facility managers need smarter control of their energy. The right data, insights, and control help them make fast decisions and act on them. This is made possible through the digitalization of building power systems. Buildings are responsible for 40% of the world’s CO2 emissions, second only to manufacturing. Yet, 30% of energy in buildings is wasted, often on heating, cooling, and lighting empty spaces.


Quantum Physics Could Finally Explain Consciousness, Scientists Say

The existence of free will as an element of consciousness also seems to be a deeply non-deterministic concept. Recall that in mathematics, computer science, and physics, deterministic functions or systems involve no randomness in the future state of the system; in other words, a deterministic function will always yield the same results if you give it the same inputs. Meanwhile, a nondeterministic function or system can give you different results from one run to the next, even if you provide the same input values. “I think that’s why cognitive sciences are looking toward quantum mechanics. In quantum mechanics, there is room for chance,” Danielsson tells Popular Mechanics. “Consciousness is a phenomenon associated with free will and free will makes use of the freedom that quantum mechanics supposedly provides.” However, Jeffrey Barrett, chancellor’s professor of logic and philosophy of science at the University of California, Irvine, thinks the connection is somewhat arbitrary from the cognitive science side.
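
A minimal Python sketch of the distinction (the function names and the noise term are illustrative, not from the article):

```python
import random

def deterministic_square(x):
    # Same input always yields the same output.
    return x * x

def nondeterministic_square(x):
    # A random perturbation means repeated calls with the same
    # input can yield different outputs.
    return x * x + random.gauss(0, 0.1)

print(deterministic_square(3), deterministic_square(3))        # 9 9
print(nondeterministic_square(3), nondeterministic_square(3))  # e.g. 9.07 8.94
```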


Eclypsium calls out Microsoft over bootloader security woes

The malicious shell activity involves visual elements that could potentially be detected by users on workstation monitors during the boot process; however, the vulnerabilities are especially dangerous for servers and industrial control systems that lack displays. The third vulnerability, CVE-2022-34302, is even harder to detect, as exploitation would remain virtually invisible to system owners. The researchers discovered that the New Horizon DataSys bootloader contains a small file that acts as a built-in bypass for Secure Boot; the 73 KB file disables the Secure Boot check without turning the protocol off completely, and it also has the ability to execute additional bypasses for security handlers. The discovery of the New Horizon DataSys built-in Secure Boot bypass was definitely a "holy crap moment," Shkatov told SearchSecurity. The researchers said admin access is required for full exploitation, but they demonstrated an exploit during the presentation that used a phishing email and a malicious Word document that elevated their privileges to admin.


Things You Should Know About Artificial Intelligence and Design

Nearly anyone who lives in the modern world produces data, often on the order of terabytes per day. We text our friends, stream videos, use fitness apps, ask Siri about the weather while we look out the window, walk by CCTV cameras, and the list goes on. Most of these data are unstructured, i.e., not organized in any clear order. Machine learning provides a way for computers to glean meaning from this lack of structure. As Armstrong puts it, “even now as you read, computers sift and categorize your data trails—both unstructured and structured — plunging deeper into who you are and what makes you tick.” How does it do this? The short answer is algorithms, statistical analysis, and prediction. Not sure what any of those words mean? ... As a researcher dedicated to demystifying emerging technology for landscape architects, I believe it is vital that we get designers of all demographics and digital abilities to a shared understanding of what AI is so we can all better facilitate its continued permeation into practice. Big Data. Big Design. does this in spades.


The effect of digital transformation on the CIO job

The CIO has always been a super-important role. I'd liken it [in the past] to the role of a flight engineer. You can't take off if the flight engineer is not on board; he or she serves a super-important purpose – it's mission critical, it's a lights-on operation. It's about delivering a really important capability: to keep the engine, the plane running, in this case, the enterprise running. We're seeing a big change happen because with digital transformation -- and using technology to deliver a new business value proposition -- the world is now starting to center around digital. And the role of the CIO is changing because he or she's now more and more becoming the pilot or the co-pilot, helping colleagues and their stakeholders and the rest of the executive committee to really reimagine the business value proposition on the back of new technology. And so that's one big change that we're going through because the [CIO] seat at the table, the role of the individual, is completely changing. I think another thing that's happening is that tech is no longer the long pole in the tent. And what I mean by that is when you do digital transformation, it isn't just the tech, it's the data. 


How Can Clinical Trials Benefit From Natural Language Processing (NLP)?

NLP can help identify patterns in participant responses that may indicate whether a treatment is effective. This information can improve the accuracy of trial results and help researchers make better decisions about which treatments to pursue. In addition, NLP can help researchers understand why certain participants respond well or poorly to a treatment. This knowledge can help develop more effective treatments in the future. Several different NLP tools can be used in clinical trials. The most commonly used tools include machine learning algorithms, text mining techniques, and Word2Vec models. Each has advantages and disadvantages. Therefore, it’s crucial to pick the appropriate tool for the job. Fortunately, many software platforms provide pre-built libraries that make it easy to use NLP in your research projects. Natural language processing (NLP) has significantly impacted clinical trials by helping researchers identify patterns in participant feedback. This has allowed for more informed decisions about modifying or improving treatments.
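
As an illustration of the Word2Vec models the article mentions, here is a minimal sketch using the gensim library on a toy corpus of invented participant feedback; the phrases and the similarity query are hypothetical, not clinical data:

```python
from gensim.models import Word2Vec

# Toy corpus of tokenized participant feedback (invented for illustration).
feedback = [
    ["nausea", "after", "morning", "dose"],
    ["mild", "headache", "and", "nausea"],
    ["symptoms", "improved", "within", "two", "weeks"],
    ["headache", "resolved", "after", "dose", "reduction"],
]

# Train a small Word2Vec model; vector_size/window are scaled down for toy data.
model = Word2Vec(sentences=feedback, vector_size=32, window=3, min_count=1, epochs=50)

# Terms that occur in similar contexts end up with similar vectors, which is
# the basis for spotting patterns across free-text responses.
print(model.wv.most_similar("nausea", topn=3))
```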


New neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today's computing platforms

The key to NeuRRAM's energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power-hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy-efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of RRAM weights. The neuron's connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure.
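
A conceptual NumPy sketch of the underlying analog idea (not the NeuRRAM hardware itself): in a crossbar of RRAM conductances, Ohm's law per cell and Kirchhoff's current law per column together compute a full matrix-vector multiply in one analog step:

```python
import numpy as np

rng = np.random.default_rng(0)

# RRAM crossbar: each cell's conductance G[i, j] stores one synaptic weight.
G = rng.uniform(1e-6, 1e-4, size=(4, 8))   # siemens, 4 outputs x 8 inputs

# Input activations encoded as row voltages.
v = rng.uniform(0.0, 0.2, size=8)          # volts

# Ohm's law per cell plus Kirchhoff's current law per column: each output
# current is a complete dot product, computed "in memory" in one analog
# step instead of shuttling weights back and forth to a processor.
I = G @ v                                   # amperes
print(I)
```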


Monoliths to Microservices: 4 Modernization Best Practices

Surveys have shown that manually analyzing a monolith using sticky notes on whiteboards takes too long, costs too much and rarely ends in success. Which architect or developer on your team has the time and ability to stop what they’re doing to review millions of lines of code and tens of thousands of classes by hand? Large monolithic applications need an automated, data-driven way to identify potential service boundaries. ... When everything was in the monolith, your visibility was somewhat limited. If you’re able to expose the suggested service boundaries, you can begin to make decisions and test design concepts — for example, identifying overlapping functionality in multiple services. ... We all know that naming things is hard. When dealing with monolithic services, we can really only use the class names to figure out what is going on. With this information alone, it’s difficult to accurately identify which classes and functionality may belong to a particular domain. ... What qualities suggest that functionality previously contained in a monolith deserves to be a microservice?
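
One data-driven approach (an illustrative sketch, not any specific vendor's tool) is to treat classes as nodes in a dependency graph and let community detection propose candidate service boundaries:

```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical class-dependency edges mined from a monolith's call graph.
edges = [
    ("OrderService", "OrderRepo"), ("OrderService", "PaymentGateway"),
    ("PaymentGateway", "PaymentRepo"), ("InvoiceJob", "OrderRepo"),
    ("UserService", "UserRepo"), ("UserService", "AuthFilter"),
    ("AuthFilter", "UserRepo"),
]
g = nx.Graph(edges)

# Modularity-based communities: tightly coupled classes cluster together,
# suggesting candidate microservice boundaries for a human to review.
for i, cluster in enumerate(community.greedy_modularity_communities(g)):
    print(f"candidate service {i}: {sorted(cluster)}")
```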


PC store told it can't claim full cyber-crime insurance after social-engineering attack

According to Chief District Judge Patrick Schiltz, who handed down the order, this case treads somewhat new legal ground. In the opinion, Schiltz noted that both SJ's lawsuit and Travelers' dismissal motion only cite three other cases, all from different jurisdictions, that "analyze the concept of direct causation in the context of computer or social-engineering fraud." All of those cases had a major difference in common, the court pointed out – none of them involved insurance policies that cover both computer and social engineering fraud, or make clear that the two types of fraud are different, mutually exclusive categories. This case, therefore, is less of a litmus test for the future of legal disagreements around social engineering insurance payouts, and more an examination of a close reading of contracts. "[Travelers'] Policy clearly anticipates – and clearly addresses – precisely the situation that gave rise to SJ Computers' loss, and the Policy bends over backwards to make clear that this situation involves social-engineering fraud, not computer fraud," Schiltz said.



Quote for the day:

"People only bring up your past when they are intimidated by your present." -- Joubert Botha

Daily Tech Digest - August 17, 2022

The second age of foundational technologies

We’re being overwhelmed by a tsunami of new foundational technology. Artificial intelligence (AI) is allowing computer systems to learn and solve problems that humans can’t. CRISPR is letting scientists edit genes and program DNA. Blockchain has brought new ways to think about money, contracts, and identity. The list of paradigm-shifting innovations goes on, and includes 3D printing, virtual reality, the metaverse, and civilian space flight. ... “When a technological revolution irrupts in the scene, it does not just add some dynamic new industries to the previous production structure. It provides the means for modernizing all the existing industries and activities.” Let that sink in for a minute. We are in the midst of “modernizing all the existing industries and activities.” That means enormous, wrenching, society-overhauling change. We see it all around us. Part of society is racing ahead with cryptocurrencies, social media, AI, and on and on—while others fight to hold on to a way of life they’ve always known. So, divides widen in society and politics, and between rich and poor, and rising and falling nations.


The IT Leader’s Guide to Helping Developers Avoid Burnout

In this new era of work, it's imperative for team members – from the CEO down – to have the ability to "read the virtual room" and have an understanding of what developers are thinking and feeling based on the tone and content of online interactions and conversations. Whether it’s Slack, Zoom, Teams or any other collaboration tool, it’s not the same as communicating face-to-face with someone who’s literally sitting at the same table. It’s possible to teach leaders the skills necessary to manage effectively in this environment, but we’re also seeing a rise of new and emerging leaders who are thriving because they place a priority on empathy and personal connections, even when most of the communication that takes place with their team members is digital. Paying attention to online social cues can help leaders determine if and when team members are stretching themselves too thin. Make no mistake, modern communication tools have helped make work more productive and efficient. But the best leaders are those who are able to analyze behavior on these tools so they can offer team members support when it’s needed most.


Edge computing: 4 key security issues for CIOs to prioritize

“Edge computing can create more complexity, and this can make securing the entire system more difficult,” says Jeremy Linden, the senior director of product management at Asimily. “Still, there is nothing inherently less secure about edge computing.” The big edge security risks should sound familiar – compromised credentials, malware and other malicious code, DDoS attacks, and so forth. What’s different is that these risks are now occurring farther and farther away from your primary or central environment(s) – the traditional network perimeter of yore is no longer your only concern. “Edge computing poses unique security challenges since you’re moving away from walled garden central cloud environments and everything is now accessible over the Internet,” says Priya Rajagopal, director, product management, Couchbase. The good news: Many of the same or similar tactics and tools organizations use to secure their cloud (especially hybrid cloud and/or multi-cloud) and on-premises environments still apply – they just need to be applied out at the edge.


Beyond Data Democracy: Why a Shift to Data Stewardship is Essential for Leadership Success

“Data democracy” has been heralded as the answer to this rapid cycle of innovation—but it is not enough. These initiatives have noble intentions: Sharing data and information about how users interact with products widely should, in theory, help groups across the business—from marketing to IT—operate from the same source of truth to stimulate better insights and better results faster. In reality, however, data democracy fails to yield those conclusive answers and shared goals. Too much raw data is difficult and time-consuming for teams to interpret, especially as the flow of digital signals has surged, and lacks the context needed to draw conclusions about the best path forward. Instead, the data is so oppressively overwhelming to manage that departments either give up or derive inaccurate conclusions—neither of which helps drive sound decisions and productive partnerships. Rather, these conditions create a new source of frustration and inefficiency for many engineering teams: the entire organization has access to information ripe for misinterpretation, even as expectations for results grow more urgent.


Microsoft Disrupts Russian Group's Multiyear Cyber-Espionage Campaign

Microsoft said its researchers have observed Seaborgium using stolen credentials to directly log in to victims' email accounts and steal their emails and attachments. In a few instances, the threat actor has also been observed configuring victim email accounts to forward emails to attacker-controlled addresses. "There have been several cases where Seaborgium has been observed using their impersonation accounts to facilitate dialogue with specific people of interest and, as a result, were included in conversations, sometimes unwittingly, involving multiple parties," ... As far as the disruption goes, the computing giant has now disabled accounts that Seaborgium actors have been using for victim reconnaissance, phishing, and other malicious activities. This includes multiple LinkedIn accounts. It has also developed detections for phishing domains associated with Seaborgium. F-Secure, which refers to the threat actor as the Callisto Group, has been tracking its activities since 2015. In a 2017 report, the security vendor had described Callisto Group as a sophisticated actor targeting governments, journalists, and think tanks in the EU and parts of eastern Europe.


What is challenging successful DevSecOps adoption?

Although adoption is low for now, the study also confirms potential growth in the industry with 62% of respondents saying their organization is actively evaluating use cases or has plans to implement DevSecOps. “As organizations adopt modern software development processes leveraging cloud platforms, they are looking to incorporate security processes and controls into developer workflows,” said Melinda Marks, senior analyst at ESG. “This research shows DevSecOps can be a game changer for companies, and there is no doubt we will see growing market traction over the next few years.” ... Companies believe that establishing a culture of collaboration and encouraging developers to leverage security best practices are nearly equal in importance to adopting DevSecOps tools. While it is common to anticipate cultural transformation to be a roadblock prior to adoption, those practicing DevSecOps report that technical limitations, such as data capture and analysis, are actually greater barriers to success.


Lawsuit Against FTC Intensifies Location Data Privacy Battle

The dispute between Kochava and the FTC also comes in the wake of an executive order by President Biden in July, following the Supreme Court Roe v. Wade ruling. Among other actions, the executive order directed the FTC to consider options "to address deceptive or fraudulent practices, including online, and protect access to accurate information" (see: Biden Order Seeks to Protect Reproductive Data Privacy). Kochava claims the government is making the company a scapegoat. "The FTC's hope was to get a small, bootstrapped company to agree to a settlement - with the effect of setting precedent across the adtech industry and using that precedent to usurp the established process of Congress creating law. Kochava disagreed with this scheme and asked the federal court in Idaho to intervene," Mariam says. Also, among other allegations, Kochava's lawsuit claims the FTC’s proposed enforcement action would overstep its legal authority related to enforcing the FTC Act. The FTC declined ISMG's request for comment on the Kochava dispute.


IT Job Market Still Strong, But Economic Headwinds Complicate Picture

David Wagner, senior research director with Computer Economics, says despite the economic headwinds, about 60% of companies surveyed in the company's latest report said they were planning to increase headcount -- the largest percentage since the 2008 recession. “We continue to think this is a sign of more IT headcount growth in the next few years,” he explains. “It comes with a small caveat, of course, which is that the economic headwinds have gotten a little stronger over the last couple of months than they were at the beginning of the year.” However, from Wagner's perspective, IT has become so strategically important to every business that, particularly when it comes to IT staffing, companies are going to be as positive about their staffing and their IT spending as they can be. “It's not a surprise when Google and Microsoft both announced their most recent hiring freezes right around the time they were giving their quarterly earnings,” he says. “I think what's going to happen is there's going to be a pause as companies look around and figure out how bad things are going to be.”


When it comes to changing culture, think small

To change the way people work together, Martin argues, leaders must model the behaviors they want to see. “Literally the only way that I’ve seen culture change in the 42 years since I graduated from business school is when a leader sets out to demonstrate a different kind of behavior and makes that behavior work. Other people take their cues from that behavior, and, slowly but surely, the culture changes,” he says. “Kremlin-watching does not happen only in Moscow—it’s an incredibly powerful force. People watch the leadership and do what the leadership does.” A notable aspect of this approach is that it does not require a major initiative or investment. Instead, the culture change depends on micro-interventions: small adjustments to the structure, dynamics, or framing of interpersonal interactions, applied consistently over time. Martin helped orchestrate this kind of change while working with A.G. Lafley when he was the CEO of Procter & Gamble. Lafley wanted to revamp the consumer giant’s overly bureaucratic strategic process. 


How To Do Data Governance Better

Business initiatives are built on data, and your data governance program needs to support those objectives. For example, your business goal might be better data discovery to make business reporting more easily consumed or findable. You need to understand—and embrace—how data is consumed and used. This drives the core metrics and dashboards for validating data and checking data quality. When you scope out a core purpose or goal you’re trying to achieve in the first few months or quarters, then you won’t get overwhelmed. A data domain represents the logical grouping of data, either by item or area of interest, within an organization. With these high-level categories in place, organizations can assign accountability or responsibility for their data. Decentralized consumption models make it possible for different teams to define categories differently based on domain-level knowledge. They may use different names or metrics for the same data. A shared vocabulary across all departments standardizes how data is being used and accessed, increasing alignment across departments and making use and accountability easier for everyone.



Quote for the day:

"You don't lead by pointing and telling people some place to go. You lead by going to that place and making a case." -- Ken Kesey

Daily Tech Digest - August 16, 2022

What are virtual routers and how can they lead to virtual data centers?

So what can you do with virtual router technology? The number one application, according to enterprises, is virtual networking, especially SD-WAN. All virtual-network technologies build an overlay network that has its own on- and off-ramp elements, which are really access routers. While many vendors offer this technology as appliances, most will also provide virtual routers for hosting on servers. That may make sense in the data center, where there are already racks of servers installed. Using virtual routers means that if one fails because its server went down, another can be easily spun up to take its place. Virtual routers are also essential in many cloud applications. Public cloud providers are understandably unenthusiastic about your sending your techs to install routers in their data centers, but you may need a virtual router there if you want to use virtual networking and SD-WAN optimally. For this type of cloud virtual routing, make sure your virtual router is compatible with the virtual network or SD-WAN technology you’re using.


Overcoming the roadblocks to passwordless authentication

There are a variety of roadblocks associated with moving to passwordless authentication. Foremost is that people hate change. End users push back when you ask them to abandon the familiar password-based login page and go through the rigamarole of registering a factor or device required for typical passwordless flows. Further, app owners will often resist changing their applications to support passwordless flows. Overcoming these obstacles can be hard and expensive. It can also be exacerbated by the need to support more than one vendor’s passwordless solution. For example, most passwordless solutions pose app-level integration challenges that require implementing SDKs to support even simple flows. What happens if you want to support more than one solution? Or use your passwordless solution as both a primary identity and authentication provider and a step-up authentication provider? Or you want to layer in behavioral analytics? There is a way to address these human and technical challenges standing in the way of passwordless adoption using orchestration. Although common in virtualized computing stacks, orchestration is a new concept in identity architectures.


Obsolescence management for IT leaders

Obsolescence will always be a by-product of continuous technological advances. The best way to improve cyber security and reduce downtime risks is to prepare effectively and take proactive steps to manage obsolescence. With a proactive obsolescence management plan in place, such as a cloud-first approach, businesses can track the lifespan of products. This ensures that IT and operational technology are always protected, improving productivity and reducing costs. To plan for the future, mid-size businesses should carry out an assessment of current infrastructure to understand the components of the IT and operational technology landscape and how these systems interact. Vendors will often publish end-of-life dates for hardware and software at least twelve months in advance. IT managers should look at how much they already spend on maintenance and whether downtime has occurred before. Understanding the risks can also help businesses make more informed decisions about their equipment. Businesses should consider how the failure of a hardware or software component will impact operations, costs and reputation, and whether the equipment is compatible with the rest of the system.


The pitfalls of poor data management – and how to avoid them

One of the challenges is how differences in patient profile can drastically change the costs associated with the same procedure. For example, a healthy patient with no comorbidities can likely receive a colonoscopy at an outpatient center. However, a patient with a medical condition such as hemophilia would need that same colonoscopy performed in the more costly hospital setting because of the complications that could potentially arise. This variability makes providing accurate estimates complicated. One way to potentially address this issue is to provide best-case and worst-case estimates. Getting to the point where these estimates can be made in real time, so that a procedure can safely continue when a complication arises without the concern of being fined or not properly reimbursed, is key. Also, while the regulations are well-intended, the reality is it is probably unnecessary to have the specified level of price transparency for every encounter. We need to focus on the most problematic events – those medical episodes that bankrupt people because they had no idea what their out-of-pocket costs would be.


Icelandic datacentres may lead the way to green IT

One of the main application areas where Icelandic datacentres make a lot of sense is in artificial intelligence (AI). With the advancement of AI methodologies such as unsupervised machine learning, for many applications, AI training and inference now needs to occur in the same location – they need to be colocated to facilitate iteration between the two processes. Foundational AI models run for weeks or months to retrain, so running a full training data set is very energy intensive. Businesses that depend on AI models do training continuously to get different versions of the models. For example, they might train for a specific customer who has a data set they want trained against. ... A second type of application where Icelandic datacentres make sense is in financial services. Although trading applications require very low latency and are usually placed close to exchanges in edge or metro locations, they depend on the output of larger, more compute intensive applications. These applications use thousands of computers 24 hours a day to run Monte Carlo simulations and Markov Chain analysis to make predictions about market trends.
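
For a flavor of that compute-heavy work, here is a minimal Monte Carlo sketch (parameters are illustrative) simulating many price paths under geometric Brownian motion, the kind of embarrassingly parallel load that suits cheap, green compute:

```python
import numpy as np

rng = np.random.default_rng(42)

s0, mu, sigma = 100.0, 0.05, 0.2   # start price, drift, volatility (assumed)
days, paths = 252, 100_000
dt = 1.0 / days

# Simulate all paths at once: S_{t+1} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
z = rng.standard_normal((paths, days))
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
prices = s0 * np.exp(np.cumsum(log_returns, axis=1))

print("mean final price:", prices[:, -1].mean())
print("5% worst case:   ", np.percentile(prices[:, -1], 5))
```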


Automotive hacking – the cyber risk auto insurers must consider

Cyber exposures are a relatively new frontier for auto insurance. Traditional risk considerations have revolved around liability or theft, but those have evolved amid the increasingly connected landscape for vehicles. “We must evaluate the types of losses happening and what’s causing those losses. Are they related to malfunctions in a vehicle? Are they related to hacking? It’s a challenge for insurers even to determine the ultimate cause of a loss,” said Perfetto. “If there was an accident, and it wasn’t the driver’s fault per se but more of a vehicle malfunction, that may not be easily attributed. If there was a hacking incident, that might not be easy to discover.” ... “We have seen data that supports reduction in accident frequency related to certain technology added to a vehicle. But we have also seen the cost of replacing some more advanced technologies increase. Something as simple as a rear end or a minor dent in your bumper that used to be an easy and relatively inexpensive item to fix has become much more costly,” Perfetto noted.


Are debt financings the new venture round for fintech startups?

You have to plan ahead for venture debt. Put it in place relatively soon after an equity financing. That way there is no adverse selection for the lenders; everyone (founders, VCs and lenders) around the table is happy at that time. If you try to put something in place with less than six months of cash, you will not be able to get debt. If you put it in place after an equity round, you can draw it down way into the future — that’s called a forward commitment/drawdown. That gives the startup a lot of optionality. It’s super important to understand all the terms. Often, founders don’t realize there are things like funding MACs, investor abandonment clauses, etc. These terms can be used by the lender to block the startup from either drawing down the money or creating a default after the money has been drawn. Either way, the company is in trouble and can’t count on the capital. So you really need to know your lender, have your VCs know your lender and pay attention to your terms. This is why we created the Sample Venture Debt Term Sheet, to explain all the terms.


The cybersecurity skills gap is ‘not just about addressing headcount’

From a security perspective, I’m hoping an increase in connected systems will lead to fewer human-error-related cyberattacks. This will largely revolve around increasing API accessibility and integration. Not only do better integrations allow employees to do better, more efficient work, they also enable a more secure infrastructure throughout your entire organisation. For example, when APIs are accessible throughout the application ecosystem, this allows for systems to be configured through code, helping us introduce streamlined changes to configuration rather than having to go into specific applications. From a security perspective, this enables us to do advanced things like segregation of duty and activity monitoring at scale. These benefits are a large part of why we prioritise connectivity and API accessibility at Templafy, both in our own tech stack and our platform. We know it not only benefits our own team, but also our customers.


IT leadership: Why adaptability matters

The rise of technology has incentivized industries to adapt in recent years. Still, that push is becoming a pull as realities like The Great Resignation and remote work push organizations to change how they interact with and relate to their customers and employees. The return on investment of developing adaptability in organizations comes from talent attraction and retention, increased innovation, improved employee engagement – and potentially, organizational survival. In the past, leaders have been able to draw from models such as William Bridges’ Transitions to understand adaptability. But while these approaches may help us to understand how a person adapts and what behaviors leaders should expect as people move through change, few have explored the why. And without that knowledge, it can be challenging for leaders to create supportive, psychologically healthy workplaces that support people as they adapt. Because adapt they must. The key to unlocking the potential of emotional intelligence is first to understand the construct and then identify the areas for development. The same goes for AQ (adaptability quotient).


Developer Experience vs. User Experience

Retaining developers requires more than first impressions. Just as good UX needs to be evaluated, refined, and tested over time, good DX is an investment in the long term. You won’t know how well you’ve succeeded without using analytics to evaluate your DX and test changes. Monitoring your API helps you identify users who have not been able to successfully make API calls, find patterns of success and failure for developers, and see how different users are engaging with your product over time. While tracking UX metrics is relatively straightforward for products focused on end-users, DX metrics differ in important ways. You need to develop a good strategy for API analytics so that you track relevant business value metrics while avoiding vanity metrics. ... You need to understand DX when you build products for developers so that you can attract developer users, inspire their confidence and creativity, and support their increasingly complex integrations over time. Building good UX and DX can be challenging, but with the right analytics stack, you can monitor your API and use metrics to craft the perfect API developer experience.
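
As a sketch of the per-user success and failure aggregation described above (the log format and names are invented for illustration):

```python
from collections import defaultdict

# Hypothetical API access log: (user_id, endpoint, http_status).
calls = [
    ("dev42", "/v1/search", 200), ("dev42", "/v1/search", 500),
    ("dev42", "/v1/ingest", 401), ("dev7", "/v1/search", 200),
]

stats = defaultdict(lambda: {"ok": 0, "fail": 0})
for user, endpoint, status in calls:
    stats[user]["ok" if status < 400 else "fail"] += 1

# Flag developers who mostly fail -- candidates for outreach or better docs.
for user, s in stats.items():
    rate = s["ok"] / (s["ok"] + s["fail"])
    print(f"{user}: success rate {rate:.0%}")
```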



Quote for the day:

"Taking charge of your own learning is a part of taking charge of your life, which is the sine qua non in becoming an integrated person." -- Warren G. Bennis

Daily Tech Digest - August 15, 2022

How critical infrastructure operators can secure OT data

OT data is foundational to critical areas of operations – a breach to OT systems can risk core business process operations and expose critical data. There is still some maturity required among organisations in prioritising backup and data protection as part of their organisation’s security posture and planned response to a cyber attack. Based on research we did in April 2022 across the UK, US and Australia of over 2,000 IT decision-makers and SecOps professionals, only 54% of IT decision-makers said backup and data protection was a top priority and a crucial capability, while only 38% of SecOps respondents said the same. Many organisations focus on “protect controls” to reduce the likelihood of a breach, but they also need to look at security controls that limit the impact of a breach. This means ensuring your recovery capabilities can meet aggressive recovery time and point objectives, so that you can resume business operations while minimising the impact of a ransomware attack.


Uber Open-Sourced Its Highly Scalable and Reliable Shuffle as a Service for Apache Spark

Spark shuffles data on local machines by default. This creates challenges as the scale gets very large (about 10,000 nodes at Uber's scale). At this scale of operation, major reliability and scalability problems arise. One main challenge in using Spark at Uber's scale is system reliability. Machines generate terabytes of data to shuffle every day. This causes SSDs to wear out faster, since they are not designed and optimized for such high-I/O workloads: SSDs generally rated for about 3 years of service last roughly 6 months under heavy Spark shuffling. Shuffle operations also fail frequently, which decreases system reliability. The other challenge in this area is scalability. Applications can produce more data than fits on a single machine, causing full-disk exceptions. To resolve these issues, engineers at Uber architected and designed a Remote Shuffle Service (RSS). It solves the reliability and scalability problems of the common Spark shuffling operation.
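
As a sketch of how an external shuffle implementation gets wired in, Spark exposes a pluggable spark.shuffle.manager setting; the RSS class name and registry keys below follow Uber's open-source project and should be treated as assumptions to verify against its README:

```python
from pyspark.sql import SparkSession

# Pluggable shuffle: point Spark at a remote shuffle manager instead of
# writing shuffle files to local SSDs. Class and host values are illustrative.
spark = (
    SparkSession.builder
    .appName("rss-demo")
    .config("spark.shuffle.manager",
            "org.apache.spark.shuffle.RssShuffleManager")             # from Uber RSS (verify)
    .config("spark.shuffle.rss.serviceRegistry.type", "standalone")    # assumed key
    .config("spark.shuffle.rss.serviceRegistry.server", "rss-host:12222")  # assumed
    .getOrCreate()
)

# Any wide transformation (groupBy, join) now shuffles through the remote service.
df = spark.range(1_000_000).withColumnRenamed("id", "k")
df.groupBy((df.k % 10).alias("bucket")).count().show()
```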


SMS-Based Multi-Factor Authentication: What Could Go Wrong? Plenty

“We call it smishmash because it’s a mashup of techniques,” explains Olofsson. “SMS for two-factor authentication [2FA] is broken. This is not news; it’s been broken since the inception. It was never intended for this use. We’ve been spoofing text messages since as long as we’ve been hacking. It’s just that now we’re seeing weaponization.” Text messages have a higher implicit trust than email scams, and hence a higher success rate, he notes. Olofsson reviewed several newsworthy breaches involving smishing and 2FA, including a major theft of NFTs from OpenSea. “We see a huge increase in the number of smishing attacks,” he says. “How many of you have got an unsolicited text in the last week? Your phone numbers are increasingly being leaked.” "What we have done [is combine] a search of the clear-net and darknet to create a huge database," says Byström. "Doing this research, we got so much spam,” adds Olofsson. "Even ‘do you want to buy the Black Hat attendee list?’ We got the price down below $100."


Sloppy Use of Machine Learning Is Causing a ‘Reproducibility Crisis’ in Science

Kapoor and Narayanan warn that AI’s impact on scientific research has been less than stellar in many instances. When the pair surveyed areas of science where machine learning was applied, they found that other researchers had identified errors in 329 studies that relied on machine learning, across a range of fields. Kapoor says that many researchers are rushing to use machine learning without a comprehensive understanding of its techniques and their limitations. Dabbling with the technology has become much easier, in part because the tech industry has rushed to offer AI tools and tutorials designed to lure newcomers, often with the goal of promoting cloud platforms and services. “The idea that you can take a four-hour online course and then use machine learning in your scientific research has become so overblown,” Kapoor says. “People have not stopped to think about where things can potentially go wrong.” Excitement around AI’s potential has prompted some scientists to bet heavily on its use in research. Tonio Buonassisi, a professor at MIT who researches novel solar cells, uses AI extensively to explore novel materials. 
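
One of the most common "where things can go wrong" errors documented in such surveys is data leakage, for instance fitting preprocessing on the full dataset before splitting. A minimal scikit-learn sketch of the wrong and right way:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=200, n_features=500, n_informative=5,
                           random_state=0)

# WRONG: selecting features on ALL data leaks test labels into training,
# inflating the reported score.
X_leaky = SelectKBest(k=20).fit_transform(X, y)
print("leaky CV score:",
      cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y).mean())

# RIGHT: keep selection inside the pipeline so it is re-fit per CV fold.
pipe = Pipeline([("select", SelectKBest(k=20)),
                 ("clf", LogisticRegression(max_iter=1000))])
print("honest CV score:", cross_val_score(pipe, X, y).mean())
```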


Why edge is eating the world

The edge is a distributed system. And when dealing with data in a distributed system, the CAP theorem applies. The idea is that you will need to make tradeoffs if you want your data to be strongly consistent. In other words, when new data is written, you never want to see older data anymore. Such strong consistency in a global setup is only possible if the different parts of the distributed system reach consensus on what just happened, at least once. That means that if you have a globally distributed database, it will still need at least one message sent to all other data centers around the world, which introduces inevitable latency. Even FaunaDB, a brilliant new distributed database, can’t get around this fact. Honestly, there’s no such thing as a free lunch: if you want strong consistency, you’ll need to accept that it includes a certain latency overhead. Now you might ask, “But do we always need strong consistency?” The answer is: it depends. There are many applications for which strong consistency is not necessary to function. One of them is, for example, this petite online shop you might have heard of: Amazon.
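
A back-of-the-envelope sketch of that latency floor: a strongly consistent write must wait for at least a quorum of geographically spread replicas to acknowledge, so commit latency is bounded below by the slower of those round trips (the RTT figures are illustrative):

```python
# Illustrative round-trip times (ms) from a US-East coordinator to replicas.
rtts = {"us-east": 1, "eu-west": 80, "ap-south": 220}

# Strong consistency (majority quorum of 3): wait for the 2 fastest acks.
quorum = sorted(rtts.values())[:2]
print("quorum commit latency >=", max(quorum), "ms")   # 80 ms

# Eventual consistency: acknowledge locally, replicate in the background.
print("local-ack latency ~", rtts["us-east"], "ms")    # 1 ms
```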


How To Protect Yourself With A More Secure Kind Of Multi-Factor Authentication

According to the Cybersecurity and Infrastructure Security Agency, “Multi-factor authentication is a layered approach to securing data and applications where a system requires a user to present a combination of two or more credentials to verify a user’s identity for login.” When we log into an online account, we’re often aiming to thwart an attacker or hacker using extra layers of verification — or locks. ... First, let’s talk about the marketing of MFA. If your MFA provider touts itself as unhackable or 99% unhackable, they are spouting multi-factor B.S. and you should find another provider. All MFA is hackable. The goal is to have a less hackable, more phishing resistant, more resilient MFA. Registering a phone number leaves the MFA vulnerable to SIM-swapping. If your MFA does not have a good backup mechanism, then that MFA option is vulnerable to loss. ... Multi-factor authentication is more securely accomplished with an authenticator app, smart card or hardware key, like a Yubikey. So if you have an app-based or hardware MFA, you’re good, right? Well, no. 
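
A minimal sketch of the core mechanism behind app-based MFA, time-based one-time passwords (RFC 6238), using the pyotp library; the account names are illustrative:

```python
import pyotp

# Enrollment: the server generates a shared secret; the user loads it into
# an authenticator app (usually via a QR code of the provisioning URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("provisioning URI:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: the app derives a 6-digit code from the secret and the current
# 30-second time window; the server verifies it independently. No SMS means
# no SIM-swap exposure, though phishing proxies can still relay codes.
code = totp.now()
print("code valid?", totp.verify(code))
```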


Met Police ramps up facial recognition despite ongoing concerns

Russell acknowledges that there are exceptional circumstances in which LFR could be reasonably deployed – for instance, under the threat of an imminent terrorist attack – but says the technology is ripe for abuse, especially in the context of poor governance combining with concerns over the MPS’s internal culture raised by the policing inspectorate, which made the “unprecedented” decision to place the force on “special measures” in June 2022 over a litany of systemic failings. “While there are many police officers who have public service rippled through them, we have also seen over these last months and years of revelations about what’s been going on in the Met, that there are officers who are racist, who have been behaving in ways that are completely inappropriate, with images [and] WhatsApp messages being shared that are racist, misogynist, sexist and homophobic,” she said, adding that the prevalence of such officers continuing to operate unidentified adds to the risks of the technology being abused when it is deployed.


Many ZTNA, MFA Tools Offer Little Protection Against Cookie Session Hijacking Attacks

The researchers recently examined technologies from Okta, Slack, Monday, GitHub, and dozens of other companies to see what protection they offered against attackers using stolen session cookies to take over accounts, impersonate legitimate users, and move laterally in compromised environments. ... Okta described such attacks as an issue for which it was not directly responsible. "As a web application, Okta relies on the security of the browser and operating system environment to protect against endpoint attacks such as malicious browser plugins or cookie stealing," Mesh quoted Okta as saying. Most of the other vendors that Mesh contacted about the issue similarly distanced themselves from any responsibility for cookie theft, reuse, and session-hijacking attacks, says Netanel Azoulay, co-founder and CEO of Mesh Security. "We believe that this issue is the complete responsibility of the vendors on our list — including IdP and ZTNA solutions," Azoulay insists. 
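
Cookie hardening doesn't stop endpoint malware from lifting a token outright, which is the researchers' point; it does, however, narrow where and for how long a stolen cookie is useful. A minimal Flask sketch of the standard attributes (route and names illustrative):

```python
import secrets
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    token = secrets.token_urlsafe(32)   # opaque, unguessable session id
    resp = make_response("ok")
    resp.set_cookie(
        "session", token,
        secure=True,         # HTTPS only -- never sent in cleartext
        httponly=True,       # invisible to JavaScript (blunts XSS theft)
        samesite="Strict",   # not attached to cross-site requests
        max_age=900,         # short lifetime shrinks the replay window
    )
    # Server-side, also bind the token to client context (IP range, device
    # posture) so a cookie replayed from elsewhere can be rejected.
    return resp
```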


Edge computing: 4 pillars for CIOs and IT leaders

By definition, edge computing sort of takes the notion of a centralized IT network environment and shatters it into hundreds or even thousands (or more) of smaller environments. Picture the classic image of a room full of servers, but now every server on every rack sits in its own room – or in many cases no room at all, but on an oil rig or manufacturing floor or cell tower. Almost regardless of your edge use cases, it’s going to entail moving lots of the stuff that has long been the domain of IT – infrastructure/compute, devices, applications, data – away from your IT environment, however that’s currently defined. Properly managing all of that stuff requires some forethought. “You’re probably going to have a lot of devices out on the edge and there probably isn’t much in the way of local IT staff there,” says Gordon Haff, technology evangelist, Red Hat. “So automation and management are essential for tasks like mass configuration, taking actions in response to events, and centralized application updates.”


CIOs Turn to the Cloud as Tech Budgets Come Under Scrutiny

Although investment in cloud tech is booming, CIOs should also be keeping a critical eye on managing cloud costs, which can quickly spiral out of control. To ensure that cloud costs are properly controlled, it is important for CIOs to have tools that enable them to tightly monitor and act on unused resources -- there are no cost benefits if these idle resources remain on the cloud balance sheet. JupiterOne CISO Sounil Yu says the engineering team should shut down these resources soon after they become idle and rebuild the resources through automation when they are needed again. “CIOs should enforce this routine because in addition to reducing costs, it improves the overall resiliency of the organization to unexpected failures since it forces engineers to practice rebuilding regularly,” he says. Dennis Monner, chief commercial officer at Aryaka, agrees cloud investment is going up, and points out there are two parts of this. “First, CIOs need to understand their true cloud costs versus bringing it back in-house, which also introduces risk and expenses,” he said. “This needs to be a true apples-to-apples comparison.”
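
A hedged boto3 sketch of the shutdown routine Yu describes: find running instances with negligible recent CPU and stop them (the idle threshold and lookback window are assumptions to tune):

```python
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for r in reservations:
    for inst in r["Instances"]:
        iid = inst["InstanceId"]
        # Average CPU over the last 24h, hourly datapoints.
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2", MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": iid}],
            StartTime=now - timedelta(hours=24), EndTime=now,
            Period=3600, Statistics=["Average"],
        )["Datapoints"]
        avg = sum(d["Average"] for d in stats) / len(stats) if stats else 0.0
        if avg < 2.0:  # assumed "idle" threshold
            print(f"stopping idle instance {iid} (avg CPU {avg:.1f}%)")
            ec2.stop_instances(InstanceIds=[iid])
```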



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry

Daily Tech Digest - August 14, 2022

Identity crisis: Artificial intelligence and the flawed logic of ‘mind uploading’

We can think of the copy as a digital clone or twin, but it would not be you. It would be a mental copy of you, including all of your memories up to the moment your brain was scanned. But from that time on, the copy would generate its own memories inside whatever simulated world it was installed in. It might interact with other simulated people, learning new things and having new experiences. Or maybe it would interact with the physical world through robotic interfaces. At the same time, the biological you would be generating new memories and skills and knowledge. In other words, your biological mind and your digital copy would immediately begin to diverge. They would be identical for one instant and then grow apart. Your skills and abilities would diverge. Your knowledge and understanding would diverge. Your personality and objectives would diverge. After a few years, there would be significant differences. And yet, both versions would “feel like the real you.” This is a critical point – the copy would have the same feelings of individuality that you have. 


It’s Time to Normalize Cyberattack Data

The hope is that as an open standard, it will be adopted and used with existing security standards and processes. Then, as developers and users incorporate OCSF into their products and processes, security data normalization will become simpler and less burdensome. This, in turn, will enable security teams to do better at analyzing attack data, identifying threats, and defending their organizations from cyberattacks. Ultimately, John Graham-Cumming, Cloudflare’s CTO, said in a statement, “Every business deserves a simple, straightforward way to analyze and understand the security landscape — and that starts with their data. By participating in the OCSF, we hope to help the entire security industry focus on doing the work that matters instead of wasting countless hours and resources on formatting data.” I hope this is true. I hate wasting time. And time is one thing we never have enough of when we’re dealing with a security problem. If OCSF can succeed in its aims, it will be a major step forward in dealing with large-scale security problems.
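
To make the normalization idea concrete, here is a simplified sketch mapping a vendor-specific log record onto a handful of OCSF-style fields; the attribute names and numeric ids are approximations to check against the published schema:

```python
# A raw, vendor-specific authentication log (invented for illustration).
raw = {"evt": "LOGIN_FAIL", "usr": "alice", "ip": "203.0.113.7",
       "ts": "2022-08-14T09:21:07Z", "box": "vpn-gw-2"}

# Map it onto a simplified, OCSF-flavored shape (field names approximate;
# verify against the actual schema at schema.ocsf.io).
normalized = {
    "class_uid": 3002,            # Authentication class (assumed id)
    "activity_id": 1,             # Logon (assumed id)
    "status": "Failure",
    "time": raw["ts"],
    "user": {"name": raw["usr"]},
    "src_endpoint": {"ip": raw["ip"]},
    "device": {"hostname": raw["box"]},
    "metadata": {"product": {"vendor_name": "ExampleVPN"}},
}
print(normalized)
```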


3 Expert-Backed Strategies for Boosting Your Entrepreneurial Energy

Entrepreneurs are a special breed of overthinkers. We're constantly making decisions, so we have to think fast on our feet. But we also must take the time to weigh our options out properly. And so we think up all possible scenarios: the good, the bad and the ugly. This used to be one of my biggest hurdles when starting. What if this client falls through? What if users aren't satisfied with our product? What if we can't attract enough attention and be sustainable? What will I do? My mind was my biggest enemy. Consequently, after a long night of tossing and turning, I'd wake up unmotivated to start the day. Here's the thing I've learned since: energy thrives on confidence. And confidence only comes when you believe in your abilities. As co-authors Linda Bloom, L.C.S.W., and Charlie Bloom, M.S.W., write in Psychology Today, "Self-trust is not trusting yourself to know all the answers, nor is it believing that you will always do the right things," they explain. "It's having the conviction that you will be kind and respectful to yourself regardless of the outcome of your efforts."


4 Flaws, Other Weaknesses Undermine Cisco ASA Firewalls

"If you have access to the virtual machine, you have full access inside the network, but more importantly, you can sniff all the traffic going through, including decrypted VPN traffic," Baines says. "So, it is a really great place for an attacker to chill out and pivot, but probably just sniff for credentials or monitor the traffic flowing into the network." Baines discovered the issue when he was investigating the Cisco ASDM to get "a level set on how the GUI (graphical user interface) works" and pull apart the protocol, he says. A component installed on administrators' systems, known as the ASDM launcher, could be used by attackers to deliver malicious code in Java class files or through the ASDM Web portal. As a result, attackers could create a malicious ASDM package to compromise the administrator's system through installers, malicious Web pages, and malicious Java components. The ASDM vulnerabilities discovered by Rapid7 include a known vulnerability (CVE-2021-1585) that allows an unauthenticated remote code execution (RCE) attack, which Cisco claimed was patched in a recent update, but Baines discovered it remained.


A Shift in Computer Vision Is Coming

Is computer vision about to reinvent itself, again? Ryad Benosman, professor of ophthalmology at the University of Pittsburgh and an adjunct professor at the CMU Robotics Institute, believes that it is. As one of the founding fathers of event-based vision technologies, Benosman expects that neuromorphic vision — computer vision based on event-based cameras — will be the next direction computer vision will take. “Computer vision has been reinvented many, many times,” Benosman said. “I’ve seen it reinvented twice at least, from scratch, from zero.” Benosman cited the shift in the 1990s from image processing with a bit of photogrammetry to a geometry-based approach and then to today’s rapid advance toward machine learning. Despite those changes, modern computer-vision technologies are still predominantly based on image sensors — cameras that produce an image similar to what the human eye sees. According to Benosman, until the image-sensing paradigm is no longer useful, it holds back innovation in alternative technologies. The development of high-performance processors, such as GPUs, delays the need to look for alternative solutions and thus has prolonged this effect.


What’s the Go programming language really good for?

Go has been compared to scripting languages like Python in its ability to satisfy many common programming needs. Some of this functionality is built into the language itself, such as “goroutines” for concurrency and threadlike behavior, while additional capabilities are available in Go standard library packages, like Go’s http package. Like Python, Go provides automatic memory management capabilities including garbage collection. Unlike scripting languages such as Python, Go code compiles to a fast-running native binary. And unlike C or C++, Go compiles extremely fast—fast enough to make working with Go feel more like working with a scripting language than a compiled language. Further, the Go build system is less complex than those of other compiled languages. It takes few steps and little bookkeeping to build and run a Go project. ... Go binaries run more slowly than their C counterparts, but the difference in speed is negligible for most applications. Go performance is as good as C for the vast majority of work, and generally much faster than other languages known for speed of development.


Ex-CIA security boss predicts coming crackdown on spyware

Protecting individuals' privacy is something all of us — including elected officials — should be very concerned about, Mestrovich said. "I would expect, going forward, there will be either executive orders or legislation passed to ensure that the civil liberties and the rights that we all expect to data privacy and privacy of our own activities are kept sacrosanct," he added. As a CISO himself, ransomware is top of mind. "Ransomware is a huge threat to just our economic viability," Mestrovich told us, citing a Cybersecurity Ventures forecast that global cybercrime costs will grow by 15 percent per year over the next five years, reaching $10.5 trillion annually by 2025. "Clearly, the cyber criminals have monetized the theft of data or depriving an organization use of its data," Mestrovich said. "Until we can do something to prevent the economic gain that they have from the theft of data or the denial of an organization's access to its data, this is only going to increase."


Urgent security warning issued as hackers shift ransomware attacks to small businesses

The Director of the NCSC Richard Browne said that in the past these groups typically focussed on larger organisations. However they have now shifted focus to smaller entities. “We have been dealing with the threat of ransomware for some time; however, we have seen a noticeable change in the tactics of criminal ransomware groups, whereby rather than largely focussing on Governments, critical infrastructure and big business, they are increasingly targeting smaller businesses. “This is a trend that has been observed globally, and Ireland is no exception with several businesses becoming victims of these groups in the past number of weeks,” he said. Richard Browne said the letter sent to IBEC by the NCSC and GNCCB has outlined guidance for small companies on how they can deal with such attacks. “Whilst we appreciate that many business owners are understandably nervous of the threat ransomware poses, there are some straightforward security measures that can be put in place to ensure that an organisation's data and systems remain secure,” he added.


Computer Vision and Deep Learning for Agriculture

AI applications can analyze weather and soil conditions, water usage, and risk of diseases to help farmers reduce the risk of crop failures by providing valuable insights like the right time to sow seeds and the right crop/seed choices. Detecting plant diseases, weeds, and pests beforehand can reduce the use of chemicals like herbicides and pesticides and bring cost savings. Many companies have started using robots that can eliminate 80% of the volume of the substances generally sprayed on the crops and bring down the expenditure on herbicides by 90%. Further, the use of AI in harvesting, picking, and vacuum apparatus can quickly identify the location of the harvestable produce and help determine the proper fruits. Strawberry harvesting is a classic example. ... With satellite imagery and weather data, AI applications can analyze the market trends, like which crops are in demand and which are more profitable. This helps the farmers to increase their revenue by guiding them about future price patterns, demand level, type of crop to sow for maximum benefit, pesticide usage, etc.


Rethinking Web Application Firewalls

Vulnerabilities are now so numerous, and cloud native applications have such large attack surfaces, that there is no way to mitigate them using traditional means, Tiperneni explained. “It’s no longer sufficient to throw out a report that tells you about all the vulnerabilities in your system,” Tiperneni said. “Because that report is not actionable. People operating the services are discovering that the amount of time and effort it takes to remediate all these vulnerabilities is incredible, right? So they’re looking for some level of prioritization in terms of where to start.” And the onus is on the user to mitigate the problem, Tiperneni said. Those customers have to think about the blast radius of the vulnerability and its context in the system. The second part is how to manage the attack surface: in this world of cloud native applications, customers are discovering very quickly that trying to protect every single thing, when everything has access to everything else, is an almost impossible task, Tiperneni said.
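The prioritization Tiperneni describes is often approximated with a context-aware scoring heuristic. The Go sketch below is illustrative only; the Finding fields, the weights, and the placeholder CVE identifiers are assumptions made for the example, not details from Tiperneni or any specific product.

    package main

    import (
        "fmt"
        "sort"
    )

    // Finding is one reported vulnerability plus the runtime context that
    // determines its blast radius. All fields here are illustrative.
    type Finding struct {
        CVE             string
        CVSS            float64 // base severity score, 0-10
        InternetExposed bool    // is the workload reachable from outside?
        RunningInProd   bool    // is the vulnerable code actually deployed?
    }

    // priority scales raw severity by exposure: a moderate flaw on an
    // internet-facing production service can outrank a critical one
    // buried in an unreachable test image.
    func priority(f Finding) float64 {
        score := f.CVSS
        if f.InternetExposed {
            score *= 2.0
        }
        if f.RunningInProd {
            score *= 1.5
        }
        return score
    }

    func main() {
        findings := []Finding{
            {CVE: "CVE-0000-0001", CVSS: 9.8, InternetExposed: false, RunningInProd: false},
            {CVE: "CVE-0000-0002", CVSS: 7.5, InternetExposed: true, RunningInProd: true},
        }
        sort.Slice(findings, func(i, j int) bool {
            return priority(findings[i]) > priority(findings[j])
        })
        for _, f := range findings {
            fmt.Printf("%s: priority %.1f\n", f.CVE, priority(f))
        }
    }

Ranking by such a score is what turns a raw vulnerability report into the actionable starting point Tiperneni is asking for: here the internet-facing production flaw surfaces first even though its base severity is lower.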



Quote for the day:

"The Leadership Seduction of storytelling invites self-pity, exaggerates one's importance, and encourages inaction." -- Catherine Robinson-Walker

Daily Tech Digest - August 13, 2022

CEOs need to start caring about the cybersecurity talent gap crisis, new report shows

The focus on cybersecurity needs to start in the boardroom, Morgan argues. CEOs at every Fortune 500 company and at midsize to large organizations should advocate for having directors with cybersecurity experience on their boards, he says. “That could be the [chief information security officer (CISO)] or an outside executive with real-world cybersecurity experience,” he says. “Do it now to protect your organization, not after a breach or hack to protect your reputation.” By 2025, 35% of Fortune 500 companies will have board members with cybersecurity experience, according to the Cybersecurity Ventures report, and by 2031 that figure will climb to more than 50%. By comparison, last year just 17% of Fortune 500 companies had board members with this type of background. The thinking is that if cybersecurity is a regular boardroom discussion, its importance will trickle down to the rest of the organization, Morgan says, becoming part of the company’s DNA. He encourages executives to take cybersecurity as seriously as profit and loss discussions.


5 elements of a successful digital platform

“Data is everything for us,” Rotenberg said. Making sure you have high quality data and that you can constantly iterate on it and improve it should be a priority when building a platform. “That’s something that we spend a lot of time on because it’s such an important foundation,” she said. One way the company uses it is to personalize the experience for clients. For example, this might mean using digital credentials. It may sound simple, but having the right mobile phone number means that Fidelity can interact with clients in the way they want. “Sometimes it’s the most basic things that actually make the biggest difference,” she said. ... There are a lot of different ways that fintechs and Fidelity could work with or against each other. “A fintech could be our competitor, our vendor, [or] we could be a client as well, and vice versa,” she said. Successful fintechs, in particular, usually have gotten something right in understanding a “customer friction” that other firms haven’t figured out. “They go deep in understanding the friction, they create success, and then they scale outward,” Rotenberg said. 


Top cybersecurity products unveiled at Black Hat 2022

Software composition analysis (SCA), static application security testing (SAST), and container scanning are the latest capabilities in the new update to the Cycode supply chain security management platform. All new components feed Cycode’s knowledge graph, which structures and correlates data from the tools and phases of the software development life cycle so that programmers and security professionals can understand risks and coordinate responses to threats. A key function of the knowledge graph is the ability to coordinate security tools on the platform for tasks such as identifying when leaked code contains secrets like API keys or passwords, in order to reduce risk. Support for vulnerability detection and protection across runtime environments, including the Java Virtual Machine (JVM), Node.js, and the .NET CLR, has been added to the Application Security Module in the Dynatrace software and infrastructure monitoring platform. Additionally, Dynatrace has extended its support to applications running in Go, a fast-growing, open-source programming language developed at Google.
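To illustrate the kind of leaked-secret detection described above, here is a minimal Go sketch of the underlying technique: pattern matching over source text. The two patterns are simplified stand-ins of my own; production scanners, Cycode's included, rely on much larger rule sets plus entropy and context checks.

    package main

    import (
        "fmt"
        "regexp"
    )

    // secretPatterns maps a human-readable label to a detection regex.
    // These two rules are illustrative; real scanners ship hundreds.
    var secretPatterns = map[string]*regexp.Regexp{
        "AWS access key ID": regexp.MustCompile(`AKIA[0-9A-Z]{16}`),
        "generic API key":   regexp.MustCompile(`(?i)api[_-]?key\s*[:=]\s*['"][A-Za-z0-9]{16,}['"]`),
    }

    // scan reports the byte offset of anything that looks like a secret.
    func scan(source string) {
        for label, re := range secretPatterns {
            if loc := re.FindStringIndex(source); loc != nil {
                fmt.Printf("possible %s at byte offset %d\n", label, loc[0])
            }
        }
    }

    func main() {
        scan(`api_key = "abcd1234abcd1234abcd"`)
    }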


Google Cloud and Apollo24|7: Building Clinical Decision Support System (CDSS) together

For any health organization that wants to build a CDSS, one key building block is locating and extracting the medical entities present in clinical notes, medical journals, discharge summaries, and similar documents. Along with entity extraction, the other key components of a CDSS are capturing temporal relationships, subjects, and certainty assessments. ... The advantage of AutoML Entity Extraction is that it gives the option to train on a new dataset. However, one prerequisite to keep in mind is that it needs a little pre-processing to capture the input data in the required JSONL format. Since this is an AutoML model just for entity extraction, it does not extract relationships, certainty assessments, and so on. ... The major advantage of BERT-based models is that they can be fine-tuned on any entity recognition task with minimal effort. However, since this is a custom approach, it requires some technical expertise. Additionally, it does not extract relationships or certainty assessments, which is one of the main limitations of using BERT-based models.
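As a sketch of that JSONL pre-processing step, the Go program below turns one annotated clinical note into a single training line. The field names follow the general shape of Vertex AI AutoML text-extraction input but are simplified here; the note, entity labels, and offsets are all invented for the example, so consult the product documentation for the exact schema before relying on this.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Annotation marks one entity span in the note by character offsets.
    type Annotation struct {
        StartOffset int    `json:"startOffset"`
        EndOffset   int    `json:"endOffset"`
        DisplayName string `json:"displayName"`
    }

    // Example is one line of the JSONL training file: the raw text plus
    // its entity annotations.
    type Example struct {
        TextContent            string       `json:"textContent"`
        TextSegmentAnnotations []Annotation `json:"textSegmentAnnotations"`
    }

    func main() {
        note := "Patient reports fever and was prescribed paracetamol."
        ex := Example{
            TextContent: note,
            TextSegmentAnnotations: []Annotation{
                {StartOffset: 16, EndOffset: 21, DisplayName: "SYMPTOM"},    // "fever"
                {StartOffset: 41, EndOffset: 52, DisplayName: "MEDICATION"}, // "paracetamol"
            },
        }
        line, err := json.Marshal(ex)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(line)) // one JSONL line, ready for the import file
    }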


In a hybrid workforce world, what happens to all that office space?

Amy Loomis, a research director for IDC's worldwide Future of Work market research service, said her research isn't showing an overall reduction in square footage, but more companies may be subleasing unused space or reconfiguring it to better suit hybrid work. The key phrase is "space optimization," which is being done to attract new employees and for environmental sustainability. In North America, 34% of companies surveyed by IDC said that was a key driver in real estate investments. “What we’re seeing is repurposing of office space,” Loomis said. “Organizations are investing in office spaces and making them as dynamic, reconfigurable, and sustainable as possible. So, yes, they left that building during the pandemic and predominantly went remote and hybrid, but as people go forward into the new office space, it’s more likely to be multi-purpose, multifunction, multi-tenant,” Loomis added. Many real estate developers now see the value in repurposing spaces to include not only room for commercial use, but also space for retail and even residential housing.


6 Myths About the Cloud That You Should Stop Believing

Cloud migration is an enticing prospect, but you’ve probably heard what happens when you have too much of a good thing. Going the cloud route and adopting cloud data integration doesn’t have to mean moving your entire business at once. Despite the recognized short- and long-term benefits, the expense alone would be too daunting for many. Cloud migration can take many forms. Implementing a hybrid approach to cloud technology is considerably more common, with many organizations starting with a particular area or application (such as email) and working their way up. ... True, virtualization is a vital technology for cloud computing, but virtualization doesn’t equal cloud computing. While virtualization is mainly concerned with workload and server consolidation to reduce infrastructure costs, cloud computing encompasses much more. Consider that, according to an IOUG (Independent Oracle User Group) study of its members, cloud clients are embracing Platform as a Service faster than Infrastructure as a Service.


Department of Health investigates bias in medical devices and algorithms

As part of an independent review on equity in medical devices, led by Margaret Whitehead, WH Duncan chair of public health in the Department of Public Health and Policy, the government is seeking to tackle disparities in healthcare by gathering evidence on how medical devices and technologies may be biased against patients of different ethnicities, genders and other socio-demographic groups. For instance, some devices employing infrared light or imaging may not perform as well on patients with darker skin pigmentation, which has not been accounted for in the development and testing of the devices. Experts are being asked to provide as much information as possible about biases in medical devices. Along with information about the device type, name, brand or manufacturer, the independent review is also looking to gather as much detail as possible about the intended use of medical devices that may be discriminatory, the patient population on which they are used, and how and why these devices may not be equally effective or safe for all the intended patient groups.


Event-Driven Architectures & the Security Implications

It’s never easy to crush a rock, but it is far from impossible. Taking an existing application from traditional architecture to EDA requires extensive resources and development time. Also, while building something new can be exciting, reworking the old may be unstimulating, especially when it still seems functional. This can sometimes result in postponing such a drastic transition. However, this transformation can be quite enlightening—both from a technical and an operational viewpoint. Developers perceive EDA to be inherently complex, especially for businesses with intricate processes. There is the concern that EDA does not effectively capture critical aspects of a company and that monitoring and debugging the system is more challenging because of the lack of a centralized structure. However, this complexity does not simply disappear by opting for a different architecture. Monitoring and debugging are easier with suitable tracing tools that are tailor-made for distributed systems, proper encapsulation of individual services, and an in-depth understanding of the functions of individual services and the events that should trigger them. 
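As a minimal illustration of the pattern under discussion, the Go sketch below shows services reacting independently to published events with no central coordinator. It is in-process for brevity; a real EDA would sit on a broker such as Kafka or NATS and use the distributed-tracing tools the article mentions. The event types and service names are invented for the example.

    package main

    import (
        "fmt"
        "sync"
    )

    // Event is the unit of communication: producers emit it, services react.
    type Event struct {
        Type    string
        Payload string
    }

    // handler represents one independent service's reaction to an event.
    type handler func(Event)

    func main() {
        // The "bus" maps event types to the services subscribed to them.
        bus := map[string][]handler{
            "order.created": {
                func(e Event) { fmt.Println("inventory service reserves stock for", e.Payload) },
                func(e Event) { fmt.Println("email service sends confirmation for", e.Payload) },
            },
        }

        events := []Event{{Type: "order.created", Payload: "order-42"}}

        var wg sync.WaitGroup
        for _, ev := range events {
            for _, h := range bus[ev.Type] {
                wg.Add(1)
                // Each subscriber reacts concurrently; the producer never
                // knows who is listening, which is the decoupling EDA buys.
                go func(h handler, ev Event) {
                    defer wg.Done()
                    h(ev)
                }(h, ev)
            }
        }
        wg.Wait()
    }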


Composing the future of banks

The biggest challenge for any bank is how to reach such a vision of composable banking when, after decades of investment in technology automation, it has hundreds or thousands of systems, some sharing data through extraction, some integrated through technical bridges, and maybe a few more modern solutions connected through APIs. Integration is one of the biggest headaches a bank has, so the idea of composable banking would be simpler if every system had APIs, but that just isn’t the real world. In addition, not every process is based on system-to-system interaction. There are processes that require human intervention, often managed by business process automation software. Sometimes these processes are necessary because systems integration may not be possible without them: the swivel-chair problem of keying data from one system into another. In the last few years, artificial intelligence (AI) has been added to the mix to make the routing of flows smarter. As always, technologists are great at solving individual processes, but business tends to be more complex, and it is only much later that we start to see the bigger picture.


How to Hire the Best AI & Machine Learning Consultants

AI and machine learning consultants are qualified, experienced AI designers, developers, and other experts who help design, implement, and integrate AI solutions into a company’s business environment. They can provide, develop, and advise on a wide range of AI capabilities, such as predictive analytics, data science, natural language processing (NLP), computer vision, process automation, and voice-enabled technology. These consultants can evaluate the potential of data, software infrastructure, and technology to effectively deploy AI systems and workflows. When bringing on the best AI and machine learning consultants, you should look for specialists who go beyond data science. Most AI and machine learning projects involve far more than data science; for example, they involve engineering, aggregating, and formatting data to teach an AI system. These projects also often involve hardware, wireless, and networking, meaning the consultant should also be an expert in the cloud and the Internet of Things (IoT).



Quote for the day:

"The great leaders are like best conductors. They reach beyond the notes to reach the magic in the players." -- Blaine Lee