Daily Tech Digest - October 31, 2020

Six frightening data stories that will give you nightmares

Acarophobia is a fear of tiny, crawling, parasitic insects; apiphobia is a fear of bees; and arachnophobia, a fear of spiders. But what is the term for a phobia of those beastly bugs that can bring down an entire server? This happened at a London advertising agency! The creative team had an important customer deadline to meet and could no longer access their critical Adobe Illustrator data and other large creative files. The disaster recovery plan would take two days to restore the data – one day after the job deadline. The clock was ticking… The problem was the recovery time objective (RTO) set up years ago: because the longer the RTO, the lower the price, this firm thought a shorter RTO wasn’t worth it. But don’t be fooled when it comes to protecting your business-critical data, for there’s always a price to pay… Imagine you have friends coming for a Halloween party and you arrive home from the supermarket, bags full of decorations, drinks, and ice, only to find that you don’t have your house key. Workers who had planned to work on some company files, only to realise they cannot access them when working from home, no doubt feel the same way, especially during this Covid-19 pandemic. Users may be completely locked out of their data files, but more often, they face a tedious and clunky experience to access those files.


Honeywell introduces quantum computing as a service with subscription offering

The H1 has been up and running for several months internally at Honeywell, but has been in use by customers for about three weeks, said Uttley. Honeywell has been working with eight enterprise customers, including DHL, Merck, and JP Morgan Chase. Some of those customers had been working on the H0 system and were able to easily "port over" work to the new machine, said Uttley. One reason for the subscription is that there is still substantial hand-holding that happens. Those blocks of dedicated time include participation by Honeywell quantum theorists and Honeywell operations teams, who work "hand in hand" with customers. Honeywell's hands-on approach to customer subscriptions makes sense given that much of the work that customers will be doing initially is to gain a sense of trust, said Uttley. They will be seeing what results they get from the quantum computer and matching those to the same work on a classical computer, to validate that the quantum system produces correct output. On top of the blocks of dedicated time, each subscriber can get queueing time, said Uttley, where jobs are processed as capacity is available.


JPM Coin debut marks start of blockchain’s value-driven adoption cycle

In a recent interview, JP Morgan’s global head of wholesale payments stated that the launch of JPM Coin as well as certain other “behind the scenes moves” prompted the banking giant to create a new business outfit called Onyx. The unit will allow the company to sharpen its focus on its various ongoing blockchain and digital currency efforts. Onyx reportedly has more than 100 staff members and has been established with the goal of commercializing JP Morgan’s various envisioned blockchain and crypto projects, moving existing ideas from their research and development phase to something more tangible. When asked about future plans and whether crypto factors majorly into the company’s upcoming scheme of things, a media relations representative for J.P. Morgan told Cointelegraph that there are no additional announcements on top of what was already unveiled recently. Lastly, on Oct. 28, the bank announced that it was going to rebrand its blockchain-based Interbank Information Network, or IIN, to “Liink” as well as introduce two new applications — Confirm and Format — that have been developed for the specific purposes of account validation and fraud elimination for its clients. Liink will be a part of the Onyx ecosystem and will enable participants to collaborate with one another in a seamless fashion.


What is DevOps? with Donovan Brown

“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.” – Donovan Brown. Why we do “DevOps” comes down to that one big word Donovan highlights… value. Our customers want the services we provide to always be available, to be reliable, and to let them know if something is wrong. These are the same expectations we should all hold when working together to deliver the application or service our end user will experience. By fostering an environment where the team values a common goal, we can see greater productivity and success for our users. Donovan Brown opens the “Deliver DevOps” event at the Microsoft Reactor in the UK by taking us for a lap around Azure DevOps, concluding with the announcement of a new UK geo to store your Azure DevOps data. What is DevOps? This is a question that seems to be constantly debated. Is it automation? Is it culture? Is DevOps a team? Is DevOps a philosophy? All great things to ask. By embracing DevOps, teams are able to provide the most value for their customers. In this video, Donovan Brown, Principal DevOps Manager at Microsoft, gives us the Microsoft definition of DevOps in just a few minutes.


Driving remote workforce efficiency with IoT security

As with all cybersecurity issues, no “one size fits all” approach to IoT security exists. At the core, the IoTSCF provides guidance across compliance classes. However, it does set some specific minimum requirements for all IoT devices. Among these security controls, the IoTSCF suggests: Having an internal organizational member who owns and is responsible for monitoring the security; Ensuring that this person adheres to the compliance checklist process; Establishing a policy for interacting with internal and third-party security researchers; Establishing processes for briefing senior executives in the event the IoT device leads to a security incident; Ensuring a secure notification process for notifying partners/users; and Incorporating IoT and IoT-based security events as part of the Security Policy. From a hardware and software perspective, the following suggestions guide all compliance classes: Ensuring the product’s processor system has an irrevocable hardware Secure Boot process; Enabling the Secure Boot process by default; Ensuring the product prevents the ability to load unauthenticated software and files; and Ensuring that devices supporting remote software updates incorporate the ability to digitally sign software images ...
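To make that last point concrete, here is a minimal sketch of what digitally signing and verifying a software image could look like, written in TypeScript against Node.js's built-in crypto module; the file names, key paths and RSA/SHA-256 choice are illustrative assumptions rather than anything mandated by the IoTSCF.

```typescript
import { createSign, createVerify } from "crypto";
import { readFileSync } from "fs";

// Vendor side: sign the firmware image with a private key (hypothetical file names).
const image = readFileSync("firmware-v1.2.bin");
const privateKey = readFileSync("vendor_private.pem", "utf8");
const signer = createSign("SHA256");
signer.update(image);
const signature = signer.sign(privateKey); // shipped alongside the image

// Device side: refuse to install anything whose signature does not verify.
const publicKey = readFileSync("vendor_public.pem", "utf8");
const verifier = createVerify("SHA256");
verifier.update(image);
if (!verifier.verify(publicKey, signature)) {
  throw new Error("Unauthenticated software image - update rejected");
}
console.log("Signature verified - update may proceed");
```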


Why 2021 will be the year of low-code

Low-code will make it to the mainstream in 2021, with 75% of development shops adopting low-code platforms, according to Forrester's 2021 predictions for software development. This shift is due in part to the new working environment and product demands caused by the COVID-19 crisis. Forrester analysts found that "enterprises that embraced low-code platforms, digital process automation, and collaborative work management reacted faster and more effectively than firms relying only on traditional development." ... Forrester analysts also noted the importance of adjusting communication habits and workflows in the new year. The report notes that teams that had already invested in high-trust culture, agile practices, and cloud platforms found it easier to adapt to 100% remote work. Teams that relied on a command-and-control approach to work and older platforms struggled to adjust to this new environment. ... Making this happen will require sustained attention and active management: "Keeping developers out of endless virtual meetings while maintaining governance will particularly challenge organizations in regulated industries, and they will embrace value stream management as a way of maintaining data-informed insights and collecting process metrics that enable compliance and governance at scale."


Artificial Intelligence Is Modernizing Restaurant Industry

While technology is growing and benefiting many industries, certain industries are still struggling to survive. One such industry fighting a battle of endurance amidst its peers is the restaurant business. 52% of restaurant proprietors agree that high operating and food costs are the top difficulties they face while running their business. Restaurants can stay on top of everything through the proper implementation of technology in their business. One such technology said to have a critical impact on this industry niche is artificial intelligence. There are undoubtedly various advantages to implementing artificial intelligence in restaurants, such as improved customer experience, more sales, less food wastage, and so forth. ... Weather is an important factor in restaurant sales. Studies show that 7 out of 10 restaurants state that weather forecasts affect their sales. Perhaps it’s bright and an ideal day to enjoy a sangria on a patio with friends, or possibly it’s cold and desolate outside and you feel like having hot cocoa at a cozy bistro. Regardless of whether it’s bright, shady, rainy, snowy or hotter than expected, customers are attracted to specific foods and beverages dependent on the conditions outside.


Flipping the Odds of Digital Transformation Success

The technology is important, but the people dimension (organization, operating model, processes, and culture) is usually the determining factor. Organizational inertia from deeply rooted behaviors is a big impediment. Failure should not be an option, and yet it is the most common result. The consequences in terms of investments of money, organizational effort, and elapsed time are massive. Digital laggards fall behind in customer engagement, process efficiency, and innovation. In contrast, companies that are successful in mastering digital technologies, establishing a digital mindset, and implementing digital ways of working can reach a new rhythm of continuous improvement. Digital, paradoxically, is not a binary state, but one of ongoing innovation as new waves of disruptive technologies are released to the market. Consider, for example, artificial intelligence, blockchain, the Internet of Things, spatial computing, and, in time, quantum computing. Unsuccessful companies will find it extremely hard to leverage these advances, while digital organizations will be innovating faster and pulling further away from digital laggards—heading for that bionic future. Digital transformations can define careers as well as companies.


SREs: Stop Asking Your Product Managers for SLOs

One of the fundamental premises of site reliability engineering is that you should base your reliability goals—i.e., your service level objectives (SLOs)—on the level of service that keeps your customers happy. The problem is, defining what makes your customers happy requires communication between site reliability engineers (SREs) and product managers (PMs) (aka business stakeholders), and that can be a challenge. Let’s just say that SREs and PMs have different goals and speak slightly different languages. It’s not that PMs fail to appreciate the value that SREs bring to the table. Today, in the era of software as a service, capabilities such as security, reliability and data privacy are respected as critical features of the service-product a SaaS company delivers. Modern application users and customers of software services care a lot about data privacy, cybersecurity and uptime; therefore, PMs care, too. In fact, it’s not uncommon to see these features touted prominently on a company’s website because the folks in marketing know that customers are making purchasing decisions based on whether the company can deliver reliability, speed, security and performance quality. So, yes, PMs do care.
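As a concrete illustration of what an SLO gives both sides to talk about, here is a small TypeScript sketch that turns an availability target into an error budget; the 99.9% target and the request counts are invented numbers, not figures from the article.

```typescript
// An availability SLO expressed as an error budget: with a 99.9% target,
// only 0.1% of requests in the measurement window are allowed to fail.
function errorBudget(sloTarget: number, totalRequests: number, failedRequests: number) {
  const allowedFailures = totalRequests * (1 - sloTarget);
  return {
    allowedFailures,
    remaining: allowedFailures - failedRequests, // negative means the SLO is blown
    budgetConsumed: failedRequests / allowedFailures,
  };
}

// Example: 10 million requests this month, 7,200 failures against a 99.9% SLO.
console.log(errorBudget(0.999, 10_000_000, 7_200));
// -> roughly { allowedFailures: 10000, remaining: 2800, budgetConsumed: 0.72 } (modulo floating point)
```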


How to improve the developer experience

Developers come into a software project motivated, but it doesn't take long for that energy to get sapped. "[Onboarding] is where I feel most developers lose their initial spurt of motivation," said Chris Hill, senior manager of software development at T-Mobile. An inherited software project comes with immediate barriers to productivity, such as lacking or obscure documentation and the time a developer wastes waiting for access to the code repository and dev environment. Once work begins, the developer must grasp what the code means, how it delivers value and all the tools that are part of the dev cycle. "Every [inherited project] feels like I stepped in the middle of an IKEA build cycle, and all the parts are missing, and there are no instructions, and there's no support line, and all the screws are stripped, and I have pressure that I should come out with my first feature next week," Hill said. At T-Mobile, Hill prioritizes developer experience, which is comparable to user experience but specific to developers' work. A positive developer experience is one in which programmers can easily access the tools or resources they need and apply their expertise without unnecessary constraints.



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - October 30, 2020

The future of IoT: 5 major predictions for 2021

COVID-19 certainly continues to plague the globe, and the research predicts that connected device makers will double their efforts in healthcare. But COVID-19 forced many of those who were ill to stay at home or delay necessary care. This has left chronic conditions unmanaged, cancers undetected, and preventable conditions unnoticed. "The financial implications of this loom large for consumers, health insurers, healthcare providers, and employers," Forrester's report stated. There will be a surge in interactive and proactive engagement such as wearables and sensors, which can monitor a patient's health while they are at home. Post-COVID-19 healthcare will be dominated by digital-health experiences and will improve the effectiveness of virtual care. The convenience of at-home monitoring will spur consumers' appreciation and interest in digital health devices as they gain greater insight into their health. Digital health device prices will become more consumer friendly. The Digital Health Center of Excellence, established by the FDA, is foundational for the advancement and acceptance of digital health. A connected health-device strategy devised by healthcare insurers will tap into data to improve understanding of patient health, personalization, and healthcare outcomes.


A Bazar start: How one hospital thwarted a Ryuk ransomware outbreak

We’ve been following all the recent reporting and tweets about hospitals being attacked by Ryuk ransomware. But Ryuk isn’t new to us… we’ve been tracking it for years. More important than just looking at Ryuk ransomware itself, though, is looking at the operators behind it and their tactics, techniques, and procedures (TTPs)—especially those used before they encrypt any data. The operators of Ryuk ransomware are known by different names in the community, including “WIZARD SPIDER,” “UNC1878,” and “Team9.” The malware they use has included TrickBot, Anchor, Bazar, Ryuk, and others. Many in the community have shared reporting about these operators and malware families (check out the end of this blog post for links to some excellent reporting from other teams), so we wanted to focus narrowly on what we’ve observed: BazarLoader/BazarBackdoor (which we’re collectively calling Bazar) used for initial access, followed by deployment of Cobalt Strike, and hours or days later, the potential deployment of Ryuk ransomware. We have certainly seen TrickBot lead to Ryuk ransomware in the past. This month, however, we’ve observed Bazar as a common initial access method, leading to our assessment that Bazar is a greater threat at this time for the eventual deployment of Ryuk.


Getting started with DevOps automation

We often think of the term “DevOps” as being synonymous with “CI/CD”. At GitHub we recognize that DevOps includes so much more, from enabling contributors to build and run code (or deploy configurations) to improving developer productivity. In turn, this shortens the time it takes to build and deliver applications, helping teams add value and learn faster. While CI/CD and DevOps aren’t precisely the same, CI/CD is still a core component of DevOps automation. Continuous integration (CI) is a process that implements testing on every change, enabling users to see if their changes break anything in the environment. Continuous delivery (CD) is the practice of building software in a way that allows you to deploy any successful release candidate to production at any time. Continuous deployment (CD) takes continuous delivery a step further. With continuous deployment, every successful change is automatically deployed to production. Since some industries and technologies can’t immediately release new changes to customers (think hardware and manufacturing), adopting continuous deployment depends on your organization and product. Together, continuous integration and continuous delivery (commonly referred to as CI/CD) create a collaborative process for people to work on projects through shared ownership.


Challenges in operationalizing a machine learning system

Once data is gathered and explored, it is time to perform feature engineering and modeling. While some methods require strong domain knowledge to make sensible feature engineering decisions, others can learn significantly from the data. Models such as logistic regression, random forest, or deep learning techniques are then trained. There are multiple steps involved here, and keeping track of experiment versions is essential for governance and reproducibility of previous experiments. Hence, having both the tools and an IDE for managing experiments with Jupyter notebooks, scripts, and others is essential. Such tools require provisioning of hardware and proper frameworks to allow data scientists to perform their jobs optimally. After the model is trained and performing well, in order to leverage the output of this machine learning initiative, it is essential to deploy the model into a product, whether that is on the cloud or directly “on the edge”. ... If you have a large set of inputs on which you would like to get predictions, without any immediate latency requirements, you can run batch inference on a regular cycle or with a trigger
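Below is a rough TypeScript sketch of the batch-inference pattern mentioned at the end: the data-loading, model and output functions are hypothetical stand-ins for whatever storage and model-serving layer a team actually uses, and the whole job would be kicked off by a scheduler or trigger.

```typescript
interface DataRecord { id: string; features: number[]; }
interface Prediction { id: string; score: number; }

// Stand-ins for the real storage and model-serving layers (hypothetical).
async function loadNewRecords(since: Date): Promise<DataRecord[]> {
  return [{ id: "a", features: [0.1, 0.4] }, { id: "b", features: [0.9, 0.2] }];
}
async function predictBatch(records: DataRecord[]): Promise<Prediction[]> {
  // A deployed model would be called here; this stub just averages the features.
  return records.map(r => ({
    id: r.id,
    score: r.features.reduce((sum, x) => sum + x, 0) / r.features.length,
  }));
}
async function writePredictions(predictions: Prediction[]): Promise<void> {
  console.log(predictions);
}

// Run on a regular cycle (e.g. a nightly cron job) or on a trigger; there is no
// low-latency requirement, so records are scored in chunks rather than per request.
async function runBatchInference(since: Date, chunkSize = 1000): Promise<void> {
  const records = await loadNewRecords(since);
  for (let i = 0; i < records.length; i += chunkSize) {
    const predictions = await predictBatch(records.slice(i, i + chunkSize));
    await writePredictions(predictions);
  }
}

runBatchInference(new Date()).catch(console.error);
```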


The CFO's guide to data management

"New technologies using machine learning, natural language processing, and advanced analytics can help finance leaders fix or work around many data problems without the need for large-scale investment and company-wide upheaval,'' Deloitte said. In fact, such technologies are already being used to help improve corporate-level forecasting, automate reconciliations, streamline reporting, and generate customer and financial insights, according to the firm. Why are CFOs getting involved in data management? "Business decisions based on insights derived from data are now critical to organizational performance and are becoming an essential part of a company's DNA," explained Victor Bocking, managing director, Deloitte Consulting LLP, in a statement. "CFOs and other C-level executives are getting more directly involved, partnering with their CIOs and CDOs [chief data officer] in leading the data initiatives for the parts of the business they are responsible for." As companies generate more and more data each day, finance teams have seemingly limitless opportunities to glean new insights and boost their value to the business. But doing that is easier said than done, the firm noted. The problem is the amount of data emanating daily from various sources can be overwhelming. Deloitte's Finance 2025 series calls this "the data tsunami." 


Can automated penetration testing replace humans?

To answer this question, we need to understand how they work, and crucially, what they can’t do. While I’ve spent a great deal of the past year testing these tools and comparing them in like-for-like tests against a human pentester, the big caveat here is that these automation tools are improving at a phenomenal rate, so depending on when you read this, it may already be out of date. First of all, the “delivery” of the pen test is done by either an agent or a VM, which effectively simulates the pentester’s laptop and/or attack proxy plugging into your network. So far, so normal. The pentesting bot will then perform reconnaissance on its environment by performing the scans a human would do – where you would often have a human pentester run a vulnerability scan with their tool of choice, or just a ports and services sweep with Nmap or Masscan. Once they’ve established where they sit within the environment, they will filter through what they’ve found, and this is where their similarities to vulnerability scanners end. Vulnerability scanners will simply list a series of vulnerabilities and potential vulnerabilities that have been found, with no context as to their exploitability, and will simply regurgitate CVE references and CVSS scores.
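For flavour, here is a much-simplified TypeScript (Node.js) sketch of the kind of ports-and-services sweep described above; the host is a documentation-reserved address and the port list is arbitrary, and unlike a real scanner this only reports open versus closed, with no service fingerprinting or exploitability context.

```typescript
import { Socket } from "net";

// Attempt a TCP connect with a short timeout; resolves true if the port accepts connections.
function probePort(host: string, port: number, timeoutMs = 500): Promise<boolean> {
  return new Promise(resolve => {
    const socket = new Socket();
    const finish = (open: boolean) => { socket.destroy(); resolve(open); };
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => finish(true));
    socket.once("timeout", () => finish(false));
    socket.once("error", () => finish(false));
    socket.connect(port, host);
  });
}

// Sweep a handful of common service ports on a placeholder host.
async function sweep(host: string, ports: number[]): Promise<void> {
  for (const port of ports) {
    const open = await probePort(host, port);
    console.log(`${host}:${port} ${open ? "open" : "closed/filtered"}`);
  }
}

sweep("192.0.2.10", [22, 80, 443, 445, 3389]); // 192.0.2.0/24 is reserved for documentation
```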

 

'Credible threat': How to protect networks from ransomware

Ransomware attacks are becoming more rampant now that criminals have learned they are an effective way to make money in a short amount of time. Attackers do not even need any programming skills to launch an attack because they can obtain code that is shared among the many hacker communities. There are even services that will collect the ransom via Bitcoin on behalf of the attackers and just require them to pay a commission. This all makes it more difficult for the authorities to identify an attacker. Many small and medium-size businesses pay ransoms because they do not back up their data and do not have any other options available to recover their data. They sometimes face the decision of either paying the ransom or being forced out of business ... To prevent becoming a ransomware victim, organizations need to protect their network now and prioritize resources. These attacks will only continue to grow, and no organization wants to be portrayed in the media as having been forced to pay a ransom. If you are forced to pay, customers can lose trust in your organization’s ability to secure their personal data and the company can see decreases in revenue and profit.


4 Types Of Exploits Used In Penetration Testing

Stack Based Exploits - This is possibly the most common sort of exploit for remotely hijacking the code execution of a process. Stack-based buffer overflow exploits are triggered when more data is written into a buffer on the stack than it can hold, overwriting adjacent stack memory. The stack refers to a chunk of the process memory or a data structure that operates LIFO (last in, first out). Attackers can try to force malicious code onto the stack, which may redirect the program’s flow and execute the payload the attacker intends to run. The attacker does this by overwriting the return pointer so that the flow of control is passed to the malicious code. Integer Bug Exploits - Integer bugs occur because programmers do not foresee the semantics of C operations, and they are often found and exploited by threat actors. The difference between integer bugs and other exploitation types is that they are often exploited indirectly. Likewise, the security costs of integer bugs are profoundly critical. Since integer bugs are triggered indirectly, they enable an attacker to compromise other parts of memory, securing control over an application. Even if you resolve malloc errors, buffer overflows, or even format string bugs, many integer vulnerabilities would still remain exploitable.
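The integer-wraparound idea can be illustrated even without C. In the TypeScript snippet below, the `>>> 0` operation emulates 32-bit unsigned arithmetic, showing how an attacker-supplied length near UINT32_MAX can wrap and slip past a naive size check; the buffer size and field names are invented for the example.

```typescript
// Emulate C's 32-bit unsigned arithmetic with the >>> 0 trick.
const u32 = (n: number): number => n >>> 0;

const BUF_SIZE = 64;
const HEADER_LEN = 8;

// Naive check: "header plus payload must fit in the buffer".
function naiveLengthCheck(payloadLen: number): boolean {
  return u32(payloadLen + HEADER_LEN) <= BUF_SIZE; // wraps for very large payloadLen
}

const attackerLen = 0xfffffffc;               // near UINT32_MAX
console.log(naiveLengthCheck(attackerLen));   // true: 0xfffffffc + 8 wraps around to 4
// A later copy of `attackerLen` bytes would then overflow the 64-byte buffer.
```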


AI-Enabled DevOps: Reimagining Enterprise Application Development

AI and ML play a key role in accelerating digital transformation across use cases – from data gathering and management to analysis and insight generation. Enterprises that have adopted AI and ML effectively are better positioned to enhance productivity and improve the customer experience by swiftly responding to changing business needs. DevOps teams can leverage AI for seamless collaboration, incident management, and release delivery. They can also quickly iterate and personalize application features via hypothesis-driven testing. For instance, Tesla recently enhanced its cars’ performance through over-the-air updates without having to recall a single vehicle. Similarly, periodic performance updates to biomedical devices can help extend their shelf-life and improve patient care significantly. These are just a few examples of how AI-enabled DevOps can foster innovation to drive powerful outcomes across industries. DevOps teams can innovate using the next-gen, cost-effective AI and ML capabilities offered by major cloud providers like AWS, Microsoft Azure, and Google Cloud. They offer access to virtual machines with all required dependencies to help data scientists build and train models on high power GPUs for demand and load forecasting, text/audio/video analysis, fraud prevention, etc.


What the IoT Cybersecurity Improvement Act of 2020 means for the future of connected devices

With a constant focus on innovation in the IoT industry, oftentimes security is overlooked in order to rush a product onto shelves. By the time devices are ready to be purchased, important details like vulnerabilities may not have been disclosed throughout the supply chain, which could expose and exploit sensitive data. To date, many companies have been hesitant to publish these weak spots in their device security in order to keep it under wraps and their competition and hackers at bay. However, now the bill mandates contractors and subcontractors involved in developing and selling IoT products to the government to have a program in place to report the vulnerabilities and subsequent resolutions. This is key to increasing end-user transparency on devices and will better inform the government on risks found in the supply chain, so they can update guidelines in the bill as needed. For the future of securing connected devices, multiple stakeholders throughout the supply chain need to be held accountable for better visibility and security to guarantee adequate protection for end-users.



Quote for the day:

"The great leaders have always stage-managed their effects." -- Charles de Gaulle

Daily Tech Digest - October 29, 2020

The European startups hacking your brain better than Elon Musk’s Neuralink

Musk may have put a top-notch hardware implant in a pig — but he didn’t mention plans for clinical trials on humans during the event earlier this year, which some had expected. BIOS, however, is about to embark on human trials next year. The startup aims to treat diseases for which we don’t currently have effective drugs by rewiring the brain. Part of the problem with conditions such as heart failure, arthritis, diabetes and Crohn’s disease is that the signals between the brain and diseased organs are failing. Fixing this could dramatically improve the health and wellbeing of patients. But being able to understand the complex neural codes that connect the brain with organs — and to rewire them — is more complex than what Neuralink has been able to show so far. “We are a bit like Linux if Elon Musk is Microsoft,” the cofounder Emil Hewage tells Sifted. Like Neuralink, BIOS has developed its own implant but is focusing more on the data extracted from it than on making the hardware less clunky. The company was founded by the computer neuroscientist Hewage and the bioengineer Oliver Armitage in Cambridge in 2015 as a way to commercialise all the science that had been achieved in the field in the last 20 years.


'Act of War' Clause Could Nix Cyber Insurance Payouts

To some degree, insurers are making the problem worse. In many ransomware attacks, insurers determine that paying the ransom is the least expensive way for their policyholders to recover. Such payouts, however, also keep extortion rackets in business and attacking other companies. If significant and widespread events become more common, it could have a dramatic impact on the cyber insurance industry, says Chris Kennedy, CISO at AttackIQ, a security-validation firm. "These black-swan events are very costly, and insurance companies are businesses, too," he says. "If we are going to see more and more of these black-swan events, the question is how can insurance companies afford to underwrite these policies? Just like the beaches in Florida or the flooding in Texas — where you can't get insurance anymore — if ransomware continues to be as rampant as it is, cyber insurers are going to back away from covering the damages." The impact of NotPetya on shipping giant A.P. Moller Maersk is a prime example of the risk. The company claimed more than $300 million in damages when the NotPetya worm shut down systems across the company's offices. However, the most significant threat to Maersk's business was that the worm infected and seemingly wiped all of the company's 150-plus domain controllers.


Should Your Enterprise Pick Angular, React or Blazor?

Aside from differences in the languages themselves, there’s also the development environment to consider. It used to be that .NET developers generally used Visual Studio, and anyone building frontend apps with something like React would use a different tool. These days tools like Visual Studio Code have successfully brought the various camps together to varying degrees. That said, if your team is comfortable and familiar with one particular set of tools, it may make sense to keep them there. It’s no small undertaking to switch from one coding environment to another. Over time we tend to get used to the tools/configurations we use all day every day: shortcuts, extensions, themes all help to keep us on track. It’s not an impossible mountain to climb, to switch from one tool or editor to another, but doing so is likely to slow everything down, at least in the short term. If IDEs and text editors are an important factor when it comes to development, how you get your code to production is just as (if not more) important! Angular, React and Blazor all have their own processes for deployment. In general, it’s a matter of running the correct script to package up your apps before deploying them to some form of host.


Overcoming Software Impediments Using Obstacle Boards

The initial accomplishments reaped in the use of our first Obstacle Board were great. However, over time we learned that maintaining the same approach was quite challenging. This particular team actually stopped using the board 3 ½ months after starting the experiment. Reflecting on this stoppage, I would definitely consider changing a few aspects of how we used the board at that time to help it better integrate itself as a permanent feature of our practice, and to educate others hoping to follow in our footsteps. Firstly, while we could see from the previous burndown illustration that the proportion of completed to committed stories is veering towards 100%, we didn’t reach that point within the experiment timeframe. The most likely reason for this was that we didn’t get that initial work balance of stories to obstacles right. Just like teams will use their prior sprint velocity, or perhaps an average in their sprint planning activities, so too should we have tried to better track the time taken on obstacles to adjust that ratio. Secondly, while in this experiment we fixed the definition of an obstacle to be these data validation issues, this proved to impact the longevity of the board usage. As any team grows and develops over time, what causes them to slow down evolves. If you do not revisit the causes of what slows you down regularly, you may not think of those new blockers as obstacles.


How CIOs Can Nurture a Culture of Digital Transformation

Digital transformation projects have traditionally been grounded in the adoption of new business technologies that promise to unlock innovation by streamlining projects and enhancing workflows, but they typically work from the top down in a broad vision. This type of innovation is incapable of keeping up with drastically changing business needs, nor can it compete with today’s rapidly evolving digital landscape where every executive leader is working overtime to stay ahead of market volatility.  Those at the top must focus their attention on high-level initiatives that grow and unite the business. This means that business leaders must shift away from a one-dimensional approach to digital transformation in favor of a modern, hybrid model -- one that engages workers on the frontlines of the business to collaboratively identify lapses in business processes and develop innovative solutions. These are the folks that are closest to the actual work and are best positioned to identify and remediate the problems they face day-to-day. The value these workers can bring to innovation initiatives can be ground-breaking for the business, and in most companies, this potential remains largely untapped.


How Agile Coaching plays a role in unlocking the Future of Work

Now is the time to make empiricism new again. Slow down, bring our community back to three pillars at the heart of agility: ... Transparency - Continuous attention to revealing the system around us, and not the defined processes and procedures. Specific focus and attention to revealing the human and relationship systems within teams and organizations and how they work together to create or impede the delivery of value; Inspection - Two perspectives on inspection are needed for the transition to the future. The first starts with self - how each individual approaches their own personal development & professionalism. Second, systemic development & professionalism - how teams, communities, and cultures collectively pursue mission-driven work. Inquiry should balance ones that are deep and exploratory with others guided by the pursuit of outcome-oriented ways for creating value with customers and constituents. Adaptation - The cycle to break with adaptation is change-for-change-sake. There must be a courageous dismantling of self-limiting beliefs, engrained patterns of behavior, and historical non-value-add metrics. Dismantling these creates space to adapt based on the results of inspection, experimentation, and evaluation of evidence that indicates where and how adjustments should be made.


What is Neuralink?

Neuralink is an ambitious neurotechnology company that’s aiming to upgrade nature’s most complex organ – the human brain. Founded by serial entrepreneur Elon Musk, it hopes to surgically implant tiny devices deep inside the skull, offering the potential to treat brain disorders and other medical problems, and give us the power to interact with and control machines using our minds. The idea currently falls quite firmly in the realm of sci-fi and is either utopian or dystopian, depending on who you talk to. Musk refers to it as a “Fitbit in your skull, with tiny wires”, but this is no easy install. The company would need to insert 3,072 electrodes connected to 96 thin, flexible threads into your brain. ... The human brain has 86 billion neurons, which send and receive information through electric signals via synapses. With Neuralink, each individual thread of the device will be connected in the brain, allowing it to monitor the activity of 1,000 brain neurons. Although that sounds like a small sample, amplified signals are recorded and interpreted as digital instructions, and information is sent back to the brain to stimulate electrical spikes. Data in the prototypes has been transmitted via a wired USB-C connection but the goal has been to create a wireless system.


Mitre ATT&CK: How it has evolved and grown

Despite its gaining popularity, as the data from the joint study found, users continue to have difficulty learning to use the framework. There are two fundamental challenges, Sarukkai said. "A lot of tools didn't have the ability to support it. Enterprises who don't have these products end up doing it manually, which means they aren't fully able to adopt the Mitre ATT&CK framework because they are getting inundated with instances and because they don't have the tooling they need to be effective. That's the biggest reason," he said. The second problem, Sarukkai said, is that organizations want to use ATT&CK to automate remediation and help alleviate the workload on SOC analysts. But such use requires a level of maturity with ATT&CK, and the report found that just 19% of respondents have reached that maturity level. The biggest challenge, according to Pennington, is people being overwhelmed. "We recognize that. ATT&CK for Enterprise, the main knowledge base people are using, is 156 high-level behaviors as of right now. And so, if an organization is going in and trying to just go across and immediately in one pass figure out what their stance is against 156 behaviors, they'll be overwhelmed, and we've seen that," he said.


AIOps, DevSecOps, and Beyond: Exploring New Facets of DevOps

Pushed by the pandemic, many businesses have no choice but to rely on their digital channels, he says. As organizations focus on building up reliability and put preventive measures in place, the effort becomes data intensive, Gilfix says. “People have to sift through logs that come from applications and network devices. They have to set up monitoring and alert tools,” he says. “They have to leverage all these various forms of data to figure out where the application is working, and they have to have mature abilities to build a development staging pipeline.” That means testing the applications, simulating real world needs, and moving change management into product, Gilfix says. Finding skilled professionals capable of performing those tasks quickly with large-scale applications is a challenge. This is where AIOps, the application of artificial intelligence to make sense of that data for DevOps, comes into play, he says. “Issues can be resolved quicker,” Gilfix says. “You can pinpoint similar issues in your applications and fix them preventatively. You can leverage AI to ensure, in a decentralized manner, you’re compliant and manage risk.” AI can also be used to avoid errors downstream in the development process. 


Data Privacy in a Globally Competitive Reality

At a global level, there is a spectrum of consumer data privacy regulations. On one end, the European Union's GDPR gives individuals complete control over their personal data and who can access it. Enterprises processing such data must have strict technical and organizational measures in place to ensure data protection principles such as de-identification practices or full anonymization. When data is being processed, it must be done for one of six lawful reasons and the data subject is able to revoke permission at any time. Although strict data management protects consumers' privacy, from an artificial intelligence point of view it inadvertently may limit access to critical data elements or reduce the size of the data set which ultimately could affect the ability to create accurate algorithms. Additionally, limited-size data sets can greatly impact progress on research developments. On the other end of the spectrum is China. With the largest population of internet users in the world, organizations can collect an enormous amount of data on customers that can be used in enterprise AI solutions. Because there are fewer restrictions about who can view and leverage personal data, Chinese data scientists are in many cases able to use the country's massive data sets as a competitive advantage in developing new AI algorithms.
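As a small illustration of the de-identification practices mentioned above, the TypeScript (Node.js) sketch below pseudonymizes a direct identifier with a keyed hash; the secret key and record fields are placeholders, and keyed pseudonymization alone does not amount to the full anonymization the GDPR contemplates.

```typescript
import { createHmac } from "crypto";

// Replace a direct identifier with a keyed pseudonym. Only the holder of the
// secret can re-link records; rotating or destroying the key breaks the link.
function pseudonymize(identifier: string, secretKey: string): string {
  return createHmac("sha256", secretKey).update(identifier).digest("hex");
}

const record = { email: "jane.doe@example.com", diagnosisCode: "E11.9" };
const deidentified = {
  subject: pseudonymize(record.email, process.env.PSEUDONYM_KEY ?? "dev-only-key"),
  diagnosisCode: record.diagnosisCode,
};
console.log(deidentified);
```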



Quote for the day:

"Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance." -- Thom S. Rainer

Daily Tech Digest - October 28, 2020

IT leaders adjusting to expanded role and importance since coronavirus pandemic

"IT had to ensure that their technical environment could handle the increased online demand, as well any downstream impacts to supply chain, logistics and payment applications all connected to the online engine keeping the company operating and in business. IT had to refocus efforts to enable more robust customer engagements remotely via applications and web portals." She said the best examples of this are insurance claims, government services and applications, most of which were not submitted or enabled via an application or web portal before the COVID-19 pandemic. Despite the increase in importance due to the pandemic, IT has been gaining prominence within enterprises for years, Doebel said. IT has long been moving towards the role of business-critical for several years now as technology and innovation have become synonymous with business growth and improved customer experiences.  IT teams rose to the occasion during the COVID-19 breakout and continue to drive innovation and transformation in these challenging times, she added. Important business decisions are now being put in the hands of IT workers who have to think of ways to future-proof their organizations.


5 famous analytics and AI disasters

In October 2020, Public Health England (PHE), the UK government body responsible for tallying new COVID-19 infections, revealed that nearly 16,000 coronavirus cases went unreported between Sept 25 and Oct 2. The culprit? Data limitations in Microsoft Excel. PHE uses an automated process to transfer COVID-19 positive lab results as a CSV file into Excel templates used by reporting dashboards and for contact tracing. Unfortunately, Excel spreadsheets can have a maximum of 1,048,576 rows and 16,384 columns per worksheet. Moreover, PHE was listing cases in columns rather than rows. ... The "glitch" didn't prevent individuals who got tested from receiving their results, but it did stymie contact tracing efforts, making it harder for the UK National Health Service (NHS) to identify and notify individuals who were in close contact with infected patients. In a statement on Oct. 4, Michael Brodie, interim chief executive of PHE, said NHS Test and Trace and PHE resolved the issue quickly and transferred all outstanding cases immediately into the NHS Test and Trace contact tracing system. PHE put in place a "rapid mitigation" that splits large files and has conducted a full end-to-end review of all systems to prevent similar incidents in the future.
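The kind of mitigation described, splitting a large feed so no file exceeds Excel's 1,048,576-row-per-worksheet limit, can be sketched in a few lines of TypeScript (Node.js); the input file name is a placeholder and a real pipeline would of course validate the data as well.

```typescript
import { createReadStream, createWriteStream } from "fs";
import { createInterface } from "readline";

// Excel allows 1,048,576 rows per worksheet; keep one row spare for the header.
const MAX_ROWS_PER_FILE = 1_048_575;

async function splitCsv(inputPath: string): Promise<void> {
  const lines = createInterface({ input: createReadStream(inputPath), crlfDelay: Infinity });
  let header = "";
  let part = 0;
  let rowsInPart = 0;
  let out = createWriteStream(`${inputPath}.part${part}.csv`);

  for await (const line of lines) {
    if (!header) {                       // first line is the header; repeat it in every part
      header = line;
      out.write(header + "\n");
      continue;
    }
    if (rowsInPart >= MAX_ROWS_PER_FILE) {
      out.end();
      part += 1;
      rowsInPart = 0;
      out = createWriteStream(`${inputPath}.part${part}.csv`);
      out.write(header + "\n");
    }
    out.write(line + "\n");
    rowsInPart += 1;
  }
  out.end();
}

splitCsv("positive_lab_results.csv").catch(console.error);
```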


Legal and security risks for businesses unaware of open source implications

The sobering reality is that compliance is not keeping up with usage of open source codebases. In view of this, businesses have to consider the impact of open source software in their operations as they move forward in a digitally connected world. Whether they are developing a product using open source components or involved in mergers and acquisitions activity, they have to conduct due diligence on the security and legal risks involved. One approach that has been proposed is to have a Bill of Materials (BOM) for software. Just like BOM used commonly by manufacturers of hardware, such as smartphones, a BOM for software will list the components and dependencies for each application and offer more visibility. In particular, a BOM generated by an independent software composition analysis (SCA) will offer advanced understanding for businesses seeking to understand the foundation on which they are building so many of their applications. Awareness is key to improvement. For starters, businesses cannot patch what they don't know they have. Patches must match source, so they know their code's origin. Open source is not only about source, either. 


Building a hybrid SQL Server infrastructure

The solution to this challenge is to build a SANless failover cluster using SIOS DataKeeper. SIOS DataKeeper performs block-level replication of all the data on your on-prem storage to the local storage attached to your cloud-based VM. If disaster strikes your on-prem infrastructure and the WSFC fails SQL Server over to the cloud-based cluster node, that cloud-based node can access its own copy of your SQL Server databases and can fill in for your on-prem infrastructure for as long as you need it to. One other advantage afforded by the SANless failover cluster approach is that there is no limit on the number of databases you can replicate. Where you would need to upgrade to SQL Server Enterprise Edition to replicate your user databases to a third node in the cloud, the SANless clustering approach works with both the SQL Server Standard and Enterprise editions. While SQL Server Standard Edition is limited to two nodes in the cluster, DataKeeper allows you to replicate to a third node in the cloud with a manual recovery process. With Enterprise Edition the third node in the cloud can simply be part of the same cluster.


Why Enterprises Struggle with Cloud Data Lakes

The success of any cloud data lake project hinges on continual changes to maximize performance, reliability and cost efficiency. Each of these variables require constant and detailed monitoring and management of end-to-end workloads. Consider the evolution of data processing engines and the importance of leveraging the most advantageous opportunities around price and performance. Managing workload price performance and cloud cost optimization is just as crucial to cloud data lake implementations, where costs can and will quickly get out of hand if proper monitoring and management aren’t in place. ... Public cloud resources aren’t private by default. Securing a production cloud data lake requires extensive configuration and customization efforts–especially for enterprises that must fall in line with specific regulatory compliance oversights and governance mandates (HIPAA, PCI DSS, GDPR, etc). Achieving the requisite data safeguards often means enlisting experienced and dedicated teams who are equipped to lock down cloud resources and restrict access to only users that are authorized and credentialed.


The No-Code Generation is arriving

Of course, no-code tools often require code, or at least, the sort of deductive logic that is intrinsic to coding. You have to know how to design a pivot table, or understand what machine learning capability is and what it might be useful for. You have to think in terms of data, and about inputs, transformations and outputs. The key here is that no-code tools aren’t successful just because they are easier to use — they are successful because they are connecting with a new generation that understands precisely the sort of logic required by these platforms to function. Today’s students don’t just see their computers and mobile devices as consumption screens and have the ability to turn them on. They are widely using them as tools of self-expression, research and analysis. Take the popularity of platforms like Roblox and Minecraft. Easily derided as just a generation’s obsession with gaming, both platforms teach kids how to build entire worlds using their devices. Even better, as kids push the frontiers of the toolsets offered by these games, they are inspired to build their own tools. There has been a proliferation of guides and online communities to teach kids how to build their own games and plugins for these platforms (Lua has never been so popular).


Digital transformation: 4 contrarian tips for measuring success

A CIO once told me that his employees felt confused about how their transformation progress was going. I asked, “How many transformations are you doing right now?” He started listing and realized that his team had 15 simultaneous ongoing changes. Worse, every change included different touchpoints for every individual end user, which created even more confusion for those who didn’t understand why the change was happening. Every incremental digitalization initiative should have a person or team responsible for it – the CIO, CTO, or CEO, or perhaps the internal services organization if it’s driving internal efficiency. In the cases of disruptive innovation, it should take place where it's easy to let go of the past ways of doing things, typically in a separate innovation unit. Measure the outcomes you’re looking to achieve and communicate from an outcome perspective, often through a story – and if your transformation does not fit into your objectives and key results or KPIs ... However, too much of either can hurt your progress and indicate a wider problem in your organization: Either you sweep negative feedback under the rug and focus only on the positive, which creates a culture of fear, or you focus only on the negative and forget to celebrate the good stuff, which can destroy motivation and cause a complaint culture.


Role Of E-Commerce In Driving Technology Adoption For Indian Warehousing Sector

Global supply chains and logistics sectors have undergone a major disruption during the past few months, thanks to the pandemic. Several first-time users logged on to e-commerce websites to make safe, virtual purchases for essentials and had a contactless delivery experience at their doorstep. The sector also witnessed a major shift in popular categories, from luxury and lifestyle purchases to shopping for basic essentials such as groceries, medicines, office and school supplies, e-learning tools and even food delivery. As per an impact report released by Unicommerce, titled E-commerce Trends Report 2020, e-commerce has witnessed an order-volume growth of 17 per cent as of June 2020, and about 65 per cent growth in single-brand e-commerce platforms. However, in spite of challenges such as manufacturing slowdown, shortage of labour, transportation bottlenecks, and disruption in national and international movement of cargo, the massive rise of e-commerce has brought about faster digital adoption and enhanced the potential for overall growth of the sector. With a focus on meeting consumer expectations for speedy delivery, customization, product availability and easy returns while handling complex globalization of supply chains, warehousing trends have witnessed major shifts.


A robot referee can really keep its ‘eye’ on the ball

Human umps may feel hot or tired. They may have the sun in their eyes or become distracted by a mosquito. They may even unintentionally favor players of certain nationalities, races, ages or backgrounds. A machine will not experience any of these problems. So how does the machine do it? Engineers must first spend several days setting up each stadium that will use the system. They measure the precise position of all the lines and “create a virtual-reality world to mirror what is in the stadium,” explains Hicks. They also set up 12 cameras. These will watch every part of the area where the game takes place. Then the engineers run tests — lots of them — to make sure everything works as it should. During a match, those cameras capture a ball’s flight. Software finds the tennis ball in the video. It can do this in bright, overcast or shadowy conditions. A video camera doesn’t capture every single moment of the ball’s flight, however. It actually takes many still photos very quickly. The number of photos it can take in one second is called the frame rate. In each frame, the ball will be in a new position. The system uses math to calculate a smooth path between all these positions. It also takes wind conditions into account.
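To make the frame-rate point concrete, here is a toy TypeScript sketch that linearly interpolates the ball's position between two captured frames; the coordinates and frame rate are invented, and a real system fits a physical trajectory (accounting for gravity, spin and wind) through many frames rather than drawing straight lines.

```typescript
interface Point3D { x: number; y: number; z: number; }

// Ball positions captured in two consecutive frames, plus the camera frame rate (invented numbers).
const frameA: Point3D = { x: 0.0, y: 1.2, z: 11.5 };
const frameB: Point3D = { x: 0.4, y: 1.1, z: 10.2 };
const framesPerSecond = 340;

// Estimate the position t seconds after frame A by interpolating toward frame B.
function positionAt(tSeconds: number): Point3D {
  const frameInterval = 1 / framesPerSecond;
  const fraction = Math.min(Math.max(tSeconds / frameInterval, 0), 1);
  return {
    x: frameA.x + (frameB.x - frameA.x) * fraction,
    y: frameA.y + (frameB.y - frameA.y) * fraction,
    z: frameA.z + (frameB.z - frameA.z) * fraction,
  };
}

console.log(positionAt(0.0015)); // roughly halfway between the two captured frames
```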


That dreadful VPN might finally be dead thanks to Twingate

So what does Twingate ultimately do? For corporate IT professionals, it allows them to connect an employee’s device into the corporate network much more flexibly than VPN. For instance, individual services or applications on a device could be set up to securely connect with different servers or data centers. So your Slack application can connect directly to Slack, your JIRA site can connect directly to JIRA’s servers, all without the typical round-trip to a central hub that VPN requires. That flexibility offers two main benefits. First, internet performance should be faster, since traffic is going directly where it needs to rather than bouncing through several relays between an end-user device and the server. Twingate also says that it offers “congestion” technology that can adapt its routing to changing internet conditions to actively increase performance. More importantly, Twingate allows corporate IT staff to carefully calibrate security policies at the network layer to ensure that individual network requests make sense in context. For instance, if you are a salesperson in the field and suddenly start trying to access your company’s code server, Twingate can identify that request as highly unusual and outright block it.
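A rough TypeScript sketch of that kind of context-aware decision is shown below; this is not Twingate's actual API, just an illustration in which each request is checked against a per-role allow-list and the device's trust state.

```typescript
interface AccessRequest {
  user: string;
  role: "sales" | "engineering" | "it";
  resource: string;          // e.g. "slack", "jira", "code-server"
  deviceTrusted: boolean;
}

// Per-role allow-lists; anything not listed is treated as unusual and blocked.
const allowedResources: Record<AccessRequest["role"], string[]> = {
  sales: ["slack", "crm"],
  engineering: ["slack", "jira", "code-server"],
  it: ["slack", "jira", "code-server", "admin-console"],
};

function evaluate(req: AccessRequest): "allow" | "block" {
  if (!req.deviceTrusted) return "block";
  return allowedResources[req.role].includes(req.resource) ? "allow" : "block";
}

// A field salesperson suddenly hitting the code server is out of policy.
console.log(evaluate({ user: "pat", role: "sales", resource: "code-server", deviceTrusted: true })); // "block"
```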



Quote for the day:

"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erksine

Daily Tech Digest - October 27, 2020

How realistic is the promise of low-code?

“Grady Booch, one of the fathers of modern computer science, said the whole history of computer science is one of layering – adding new layers of abstraction on top of existing technology. Low-code is simply a layer of abstraction that makes the process of defining logic far more accessible for most people. “Even children are being taught programming through languages such as MIT‘s Scratch, a visual programming language. Just as humans communicate through both words and pictures – with a picture being worth roughly 1,000 words – developers can develop using both code and low-code or visual programming languages. “Visual language is much more accessible for many people, as well as much safer. So many business users who are great subject matter experts can make small dips into defining logic or user interfaces through low-code systems, without necessarily having to commit hours and days to developing a feature through more sophisticated methods.” ...  Tools that use a visual node editor to create code paths are impressive but the code still exists as a base layer for advanced control. I once built a complete mobile video game using these visual editors. Once workflows get slightly more complex it’s helpful to be able to edit the code these tools generate.


“The Surgical Team” in XXI Century

In the surgical team of XXI century, every artifact shall have a designated owner. With ownership comes responsibility for quality of the artifact which is assessed by people who consume it (for example, consumers of designs are developers, and consumers of code are other developers who need to review it or interface with it). Common ownership as advocated by Extreme Programming can only emerge as the highest form of individual ownership in highly stable teams of competent people who additionally developed interpersonal relationships (a.k.a. friendship), and feel obligated to support one another. In other situations, collective ownership will end up with tragedy of commons caused by social loathing. Each team member will complete his assignments with least possible effort pushing consequences of low quality on others (quality of product artifacts becomes "the commons"). This is also the reason why software development outsourcing is not capable of producing quality solutions. The last pillar is respect. It is important for architect and administrator not to treat developers, testers and automation engineers as replaceable grunts (a.k.a. resources). An architect being the front-man of the team needs to be knowledgeable and experienced but it doesn’t mean that developers or testers aren’t. 


The great rebalancing: working from home fuels rise of the 'secondary city'

There are already signs of emerging disparity. Weekday footfall in big urban centres, which plummeted during lockdown, has not bounced back – the latest figures suggest less than one-fifth of UK workers have returned to their physical workplaces – which has led to reductions in public transport. This disadvantages low-income workers and people of colour, and has led to job losses at global chains such as Pret a Manger and major coffee franchises. Meanwhile, house prices in the Hamptons have reached record highs as wealthy New Yorkers have opted to weather the pandemic at the beach. Companies have also started capitalising on reduced occupancy costs – potentially passing them on to workers. The US outdoors retailer REI plans to sell its brand-new Seattle campus, two years in the making, in favour of smaller satellite sites. In the UK, government contractor Capita is to close more than a third of its 250 offices after concluding its 45,000 staff work just as efficiently at home. Not every community will be able to take advantage of the remote working boom, agrees Serafinelli. Those best placed to do so already have – or are prepared to invest in – good-quality schools, healthcare and transport links.


Deno Introduction with Practical Examples

Deno was originally announced in 2018 and reached 1.0 in 2020, created by Ryan Dahl, the original creator of Node.js, and other contributors. The name DE-NO may seem odd until you realize that it is simply NO-DE rearranged. The Deno runtime: adopts security by default (unless explicitly allowed, Deno disallows file, network, or environment access); includes TypeScript support out of the box; supports top-level await; includes built-in unit testing and code formatting (deno fmt); is compatible with browser JavaScript APIs (programs authored in JavaScript without the Deno namespace and its internal features should work in all modern browsers); and produces a single-file bundle through the deno bundle command, which lets you share your code for others to run without installing Deno. ... With simplicity and security in mind, Deno ships with some browser-compatible APIs, including fetch(), Web Workers and WebAssembly, which let you create a web server with little or no difference from a client-side JavaScript application. You can create a web server in Deno by importing the http module from the official repository. Although there are already many third-party libraries out there, the Deno standard library provides a straightforward way to accomplish this.
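As a rough illustration of that last point, a minimal web server built on the standard library's http module might look like the sketch below (the std version pinned in the import URL is only an example; use a current release):

```ts
// server.ts — a minimal HTTP server using Deno's 2020-era standard library API.
// The std version pinned in the URL is illustrative.
import { serve } from "https://deno.land/std@0.79.0/http/server.ts";

const server = serve({ port: 8000 });
console.log("Listening on http://localhost:8000/");

// The server is an async iterator that yields each incoming request.
for await (const req of server) {
  req.respond({ body: "Hello from Deno\n" });
}
```

Because Deno is secure by default, running this requires explicitly granting network access, for example with deno run --allow-net server.ts.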


How to Successfully Integrate Security and DevOps

As digitalization transforms industries and business models, organizations are increasingly adopting modern software engineering practices such as DevOps and agile to stay competitive in the modern marketplace. DevOps enables organizations to release new products and features faster, but this pace and frequency of application releases can conflict with established practices for handling security and compliance. This creates the enterprise paradox: go faster and innovate, but stay secure and avoid compromising on controls. However, integrating security into DevOps efforts (DevSecOps) across the whole product life cycle, rather than handling it independently or leaving it until after a product is released, can help organizations significantly reduce their risk, making them more agile and their products more secure and reliable. When properly implemented, DevSecOps offers immense benefits, such as easier remediation of vulnerabilities and a way to mitigate cost overruns due to delays. It also enables developers to tackle security issues more quickly and effectively.


Forrester: CIOs must prepare for Brexit data transfer

According to the Information Commissioner’s Office (ICO), while the government has said that transfers of data from the UK to the European Economic Area (EEA) will not be restricted, GDPR transfer rules will apply from the end of the transition period to any data coming from the EEA into the UK unless the EC makes an adequacy decision. The ICO website recommended that businesses consider what GDPR safeguards they can put in place to ensure that data can continue to flow into the UK. Forrester also highlighted the lack of an adequacy decision, which it said would affect the supply chain of all businesses that rely on technology infrastructure in the UK when dealing with European citizens’ personal data. The analyst firm predicted that cloud providers will start to provide a way for their customers to make this transition. The authors of the report recommended that companies focus on assessing compliance with UK data protection requirements, including the UK’s GDPR, determine how the lack of an adequacy decision will impact data transfers, and work on a transition strategy. While the ICO is the UK’s supervisory authority (SA) for the GDPR, in July the European Data Protection Board (EDPB) stated that it will no longer qualify as a competent SA under the GDPR at the end of the transition period.


Ransomware vs WFH: How remote working is making cyberattacks easier to pull off

"You have a much bigger attack surface; not necessarily because you have more employees, but because they're all in different locations, operating from different networks, not working with the organisation's perimeter network on multiple types of devices. The complexity of the attack surface grows dramatically," says Shimon Oren, VP of research and deep learning at security company Deep Instinct. For many employees, the pandemic could have been the first time that they've ever worked remotely. And being isolated from the corporate environment – a place where they might see or hear warnings over cybersecurity and staying safe online on a daily basis, as well as being able to directly ask for advice in person, makes it harder to make good decisions about security. "That background noise of security is kind of gone and that makes it a lot harder and security teams have to do a lot more on messaging now. People working at home are more insular, they can't lean over and ask 'did you get a weird link?' – you don't have anyone do to that with, and you're making choices yourself," says Sherrod DeGrippo, senior director of threat research at Proofpoint. "And the threat actors know it and love it. We've created a better environment for them," she adds.


Machine learning in network management has promise, challenges

It’s difficult to say how rapidly enterprises are buying AI and ML systems, but analysts say adoption is in the early stages. One sticking point is confusion about what, exactly, AI and ML mean. Those imagining AI as being able to effortlessly identify attempted intruders and to analyze and optimize traffic flows will be disappointed. The use of the term AI to describe what’s really happening with new network management tools is something of an overstatement, according to Mark Leary, research director at IDC. “Vendors, when they talk about their AI/ML capabilities, if you get an honest read from them, they’re talking about machine learning, not AI,” he said. There isn’t a hard-and-fast definitional split between the two terms. Broadly, they both describe the same concept—algorithms that can read data from multiple sources and adjust their outputs accordingly. AI is most accurately applied to more robust expressions of that idea than to a system that can identify the source of a specific problem in an enterprise computing network, according to experts. “We’re probably overusing the term AI, because some of these things, like predictive maintenance, have been in the field for a while now,” said Jagjeet Gill, a principal in Deloitte’s strategy practice.


The Past and Future of In-Memory Computing

“With the explosion in the adoption of IoT (which is soon to be catalyzed by 5G wireless networking), countless data sources in our daily life now generate continuous streams of data that need to be mined to save lives, improve efficiency, avoid problems and enhance experiences,” Bain says in an email to Datanami. “Now we can track vehicles in real time to keep drivers safe, ensure the safe and rapid delivery of needed goods, and avoid unexpected mechanical failures. Health-tracking devices can generate telemetry that enables diagnostic algorithms to spot emerging issues, such as heart irregularities, before they become urgent. Websites can track e-commerce shoppers to assist them in finding the best products that meet their needs.” IMDGs aren’t ideal for all streaming or IoT use cases. But when the use case is critical and time is of the essence, IMDGs will have a role in orchestrating the data and providing fast response times. “The combination of memory-based storage, transparent scalability, high availability, and integrated computing offered by IMDGs ensures the most effective use of computing resources and leads to the fastest possible responses,” Bain writes. “Powerful but simple APIs enable application developers to maintain a simplified view of their data and quickly analyze it without bottlenecks. IMDGs offer the combination of power and ease of use that applications managing live data need more than ever before.”
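To make the “powerful but simple APIs” point concrete, here is a hypothetical sketch; the grid class, method names and telemetry fields below are invented for illustration and do not correspond to any particular IMDG product:

```ts
// Hypothetical IMDG-style API — all names and types are illustrative only.
interface VehicleTelemetry {
  vehicleId: string;
  speedKph: number;
  engineTempC: number;
  updatedAt: number;
}

// A single-process stand-in for a distributed, replicated in-memory grid.
class InMemoryGrid<V> {
  private store = new Map<string, V>();

  put(key: string, value: V): void {
    this.store.set(key, value);
  }

  get(key: string): V | undefined {
    return this.store.get(key);
  }

  // "Integrated computing": run the query next to the data instead of
  // shipping every object back to the application.
  query(predicate: (value: V) => boolean): V[] {
    return [...this.store.values()].filter(predicate);
  }
}

const grid = new InMemoryGrid<VehicleTelemetry>();
grid.put("truck-42", { vehicleId: "truck-42", speedKph: 87, engineTempC: 118, updatedAt: Date.now() });
grid.put("truck-17", { vehicleId: "truck-17", speedKph: 64, engineTempC: 92, updatedAt: Date.now() });

// Flag vehicles whose engines are running hot, directly against in-memory state.
const overheating = grid.query(v => v.engineTempC > 110);
console.log(overheating.map(v => v.vehicleId)); // ["truck-42"]
```

A real IMDG would partition and replicate this state across a cluster, but the application-facing view can stay roughly this simple.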


Work from home strategies leave many companies in regulatory limbo

A solution for this crucial predicament is a potential temporary regulatory grace period. Regulatory bodies or lawmakers could establish a window of opportunity for organizations to self-identify the type and duration of their non-compliance, what investigations were done to determine that no harm came to pass, and what steps were, or will be, taken to address the issue. Currently, the concept of a regulatory grace period is slowly gaining traction in Washington, but time is of the essence. Middle market companies are quickly approaching the time when they will have to determine just what to disclose during these upcoming attestation periods. Companies understand that mistakes were made, but those issues would not have arisen under normal circumstances. The COVID-19 pandemic is an unprecedented event that companies could have never planned for. Business operations and personal safety initially consumed management’s thought processes as companies scrambled to keep the lights on. Ultimately, many companies made the right decisions from a business perspective to keep people working and avoid suffering a data breach, even in a heightened environment of data security risks. Any grace period would not absolve the organization of responsibility for any regulatory exposures.



Quote for the day:

"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera

Daily Tech Digest - October 26, 2020

How to hold Three Amigos meetings in Agile development

Three Amigos meetings remove uncertainty from development projects, as they provide a specified time for everyone to get on the same page about what to -- or not to -- build. "The meeting exposes any potential assumptions and forces explicit answers," said Jeff Sing, lead software QA engineer at Optimizely, a digital experience optimization platform. "Everyone walks away with crystal-clear guidelines on what will be delivered and gets ahead of any potential scope creep." For example, a new feature entails new business requirements, engineering changes, UX flow and design. Each team faces its own challenges and requirements. The business requirements focus on a broad problem space, and how to monetize the product. The engineering requirements center on the technical solution and hurdles. The UX requirements define product usability. The design requirements ensure the product looks finished. All of these requirements might align -- or they might not. "This is why a formalized meeting needs to occur to hash out how to achieve everyone's goals, or which requirements will not be met and need to be dropped in order to build the right product on the right time schedule," Sing said.


Key success factors behind intelligent automation

For an intelligent automation programme to really deliver, a strategy and purpose are needed. This could be improving data quality, operational efficiency, process quality and employee empowerment, or enhancing stakeholder experiences by providing quicker, more accurate responses. Whatever the rationale, an intelligent automation strategy must be aligned to the wider needs of the business. Ideally, key stakeholders should be involved in creating the vision; if they haven’t been, engage them now. If they see intelligent automation as a strategic business project, they’ll support it and provide the necessary financial and human resources too. Although intelligent automation is usually managed by a business team, it will still be governed by the IT team using existing practices, so they must also be involved from the beginning. IT will support intelligent automation on many critical fronts, such as compliance with IT security, auditability, the supporting infrastructure, its configuration and scalability. So that intelligent automation can scale as demand increases, plan where it sits within the business. A centralised approach encompasses the entire organisation, so it may be beneficial to embed this into a ‘centre of excellence’ (CoE) or start moving towards creating this operating environment.


Why Most Organizations’ Investments in AI Fall Flat

A common mistake companies make is creating and deploying AI models using Agile approaches fit for software development, like Scrum or DevOps. These frameworks traditionally require breaking down a large project into small components so that they can be tackled quickly and independently, culminating in iterative yet stable releases, like constructing a building floor by floor. However, AI is more like a science experiment than a building. It is experiment-driven: the whole model development life cycle needs to be iterated—from data processing to model development and eventually monitoring—not just built from independent components. These processes feed back into one another; therefore, a model is never quite “done.” ... We know AI requires specialized skill sets—data scientists remain highly sought-after hires in any enterprise. But the data scientists who build the models and the product owners who manage the functional requirements are not the only roles needed to make AI work. The emerging role of machine-learning engineer is required to help scale AI into reusable and stable processes that your business can depend on. Professionals in model operations (model ops) are specialized technicians who manage post-deployment model performance and are ultimately responsible for the ongoing stability and continuity of operations.


Cybersecurity as a public good

The necessity to privately provision cyber security has resulted in a significant gap between the demand for cyber security professionals and the supply of professionals with appropriate skills. Multiple studies have identified cyber security as the domain with one of the largest skills gaps. When a significant skills gap occurs in the market, it results in two things. The remuneration demanded by the professionals will skyrocket, since there are many chasing the scarce resources. Professionals who are not so skilled will also survive — rather, thrive — since the lack of alternatives means they will continue to be in demand. ...  Security as a public good involves trade-offs with privacy. Whether it is police patrols or CCTV cameras — a trade-off with privacy is imperative to make security a public good. The privacy trade-off risks will be higher in the cyber world because technology provides the capability to conduct surveillance at larger scale and greater depth. It is crucial, delicate — and hence difficult — to strike the right balance between security and privacy such that the extent of privacy sacrificed meets the test of proportionality. However, the complexity of the task, or the risks associated with it, should not prevent us from getting off a path that leads down a rabbit hole.


The Art and Science of Architecting Continuous Intelligence

Loosely defined, machine data is generated by computers rather than individuals. IoT equipment sensors, cloud infrastructure, security firewalls and websites all throw off a blizzard of machine data that measures machine status, performance and usage. In many cases the same math can analyze machine data for distinct domains, identifying patterns, outliers, etc. Enterprises have well-established processes such as security information and event management (SIEM), and IT operations (ITOps), that process machine data. Security administrators, IT managers and other functional specialists use mature SIEM and ITOps processes on a daily basis. Generally, these architectures perform similar functions as in the first approach, although streaming is a more recent addition. Another difference is that many machine-data architectures have more mature search and index capabilities, as well as tighter integration with business tasks and workflow. Data teams typically need to add the same two functions to complete the CI picture. First, they need to integrate doses of contextual data to achieve similar advantages as those outlined above. Second, they need to trigger business processes, which in this case might mean hooking into robotic process automation tools.
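As a hypothetical illustration of those two added functions (the event shape, asset directory and trigger below are invented for this sketch and are not tied to any particular SIEM, ITOps or RPA product), a data team might enrich each machine-data event with contextual data and then trigger a downstream business process when a condition is met:

```ts
// Hypothetical sketch: enrich machine-data events with context, then trigger a workflow.
interface FirewallEvent {
  deviceId: string;
  severity: number; // e.g. 1 (informational) .. 10 (critical)
  message: string;
  timestamp: number;
}

interface AssetContext {
  owner: string;
  businessUnit: string;
  criticality: "low" | "medium" | "high";
}

// Contextual data, e.g. loaded from a CMDB or asset inventory (hard-coded here).
const assetDirectory: Record<string, AssetContext> = {
  "fw-edge-01": { owner: "netops@example.com", businessUnit: "payments", criticality: "high" },
};

// Stand-in for hooking into an RPA, ticketing or workflow tool.
function triggerBusinessProcess(name: string, payload: unknown): void {
  console.log(`Triggering ${name}:`, JSON.stringify(payload));
}

function handleEvent(event: FirewallEvent): void {
  const context = assetDirectory[event.deviceId];
  const enriched = { ...event, context };

  // Escalate only critical events on high-criticality assets.
  if (event.severity >= 8 && context?.criticality === "high") {
    triggerBusinessProcess("open-incident", enriched);
  }
}

handleEvent({ deviceId: "fw-edge-01", severity: 9, message: "Port scan detected", timestamp: Date.now() });
```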


Fintech Startups Broke Apart Financial Services. Now The Sector Is Rebundling

When fintech companies began unbundling, the tools got better but consumers ended up with 15 personal finance apps on their phones. Now, a lot of new fintechs are looking at their offerings and figuring out how to manage all of a person’s personal finances so that other products can be enhanced, said Barnes. “We are not trying to be a bunch of products, but more about how each product helps the other,” Barnes said. “If we offer a checking account, we can see income coming in and be able to give you better access to borrowing. That is the rebuild—how does fintech serve all of the needs, and how do we leverage it for others?” Traditional banking revolves around relationships for which banks can sell many products to maximize lifetime value, said Chris Rothstein, co-founder and CEO of San Francisco-based sales engagement platform Groove, in an interview. Rebundling will become a core part of workflow and a way for fintechs to leverage those relationships to then be able to refer them to other products, he said. “It makes sense long-term,” Rothstein said in an interview. “In financial services, many people don’t want all of these organizations to have their sensitive data. Rebundling will also force incumbents to get better.”


Microsoft Glazes 5G Operator Strategy

Microsoft’s 5G strategy links the private Azure Edge Zones service it announced earlier this year, Azure IoT Central, virtualized evolved packet core (vEPC) software it gained by acquiring Affirmed Networks, and cloud-native network functions it brought onboard when it acquired Metaswitch Networks. Combining those services under a broader portfolio allows Microsoft to “deliver virtualized and/or containerized network functions as a service on top of a cloud platform that meets the operators where they are, in a model that is accretive to their business,” Hakl said.  “We want to harness the power of the Azure ecosystem, which means the developer ecosystem, to help [operators] monetize network slicing, IoT, network APIs … [and] use the power of the cloud” to create the same type of elastic and scalable architecture that many enterprises rely on today, he explained. That vision is split into two parts: the Azure Edge Zones, which effectively extends the cloud to a private edge environment, and the various pieces of software that Microsoft has assembled for network operators. On the latter, Hakl said Microsoft “could have gone out and had our customers teach us that over time. Instead, we acquired two companies that brought in hundreds of engineers that have telco DNA and understand the space.”


Artificial intelligence for brain diseases: A systematic review

Among the various ML solutions, Deep Neural Networks (DNNs) are nowadays considered the state-of-the-art solution for many problems, including tasks on brain images. Such human brain-inspired algorithms have been proven to be capable of extracting highly meaningful statistical patterns from large-scale and high-dimensional datasets. A DNN is a DL algorithm aiming to approximate some function f*. For example, a classifier can be seen as a function y = f*(x, θ) mapping a given input x to a category labeled y, where θ is the vector of parameters that the model learns in order to make the best approximation of f*. Artificial Neural Networks (ANNs) are built out of a densely interconnected set of simple units, where each unit takes a number of real-valued inputs (possibly the outputs of other units) and produces a single real-valued output (which may become the input to many other units). DNNs are called networks because they are typically represented by composing together many functions. The overall length of the chain gives the depth of the model; from this terminology, the name “deep learning” arises.
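To illustrate the “composition of functions” view in code, here is a minimal sketch of a two-layer forward pass; the weights, biases and layer sizes are arbitrary illustrative values, and the depth of the composed chain is what the text calls the depth of the model:

```ts
// A tiny two-layer network expressed as a composition f2(f1(x)).
// All weights, biases and sizes are arbitrary illustrative values.
type Vec = number[];
type Mat = number[][];

const relu = (v: Vec): Vec => v.map(x => Math.max(0, x));

// One dense layer: output = activation(W·x + b)
function dense(W: Mat, b: Vec, activation: (v: Vec) => Vec) {
  return (x: Vec): Vec =>
    activation(W.map((row, i) => row.reduce((sum, w, j) => sum + w * x[j], b[i])));
}

// f1 maps 3 inputs to 2 hidden units; f2 maps 2 hidden units to 1 output.
const f1 = dense([[0.2, -0.5, 0.1], [0.7, 0.3, -0.2]], [0.0, 0.1], relu);
const f2 = dense([[1.0, -1.5]], [0.05], v => v); // linear output layer

// The model approximating f* is the composed chain; θ is the set of weights and biases.
const model = (x: Vec): Vec => f2(f1(x));

console.log(model([0.5, -1.0, 2.0])); // a single real-valued output
```

Only the forward pass is shown; training would adjust θ so the composed chain better approximates f*.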


Things to Consider about Brain-Computer Interface Tech

A BCI is a system that provides a direct connection between your brain and an electronic device. Since your brain runs on electrical signals like a computer, it could control electronics if you could connect the two. BCIs attempt to give you that connection. There are two main types of BCI — invasive and non-invasive. Invasive devices, like the Neuralink chip, require surgery to implant them into your brain. Non-invasive BCIs, as you might’ve guessed, use external gear you wear on your head instead. ... A recent study suggested that brain-computer interface technology, and NeuraTech in general, could measure worker comfort levels in response to their environment. They could then automatically adjust the lights and temperature to make workers more comfortable and minimize distractions. Since distractions take up an average of 2.1 hours a day, these BCIs could mean considerable productivity boosts. The Department of Defense is developing BCIs for soldiers in the field. They hope these devices could let troops communicate silently or control drones with their minds. As promising as BCIs may be, there are still some lingering concerns with the technology. While the Neuralink chip may be physically safe, it raised a lot of questions about digital security.


Microsoft did some research. Now it's angry about what it found

A fundamental problem, said Brill, is the lack of trust in society today. In bold letters, she declared: "The United States has fallen far behind the rest of the world in privacy protection." I can't imagine it's fallen behind Russia, but how poetic if that were true. Still, Brill really isn't happy with our government: "In total, over 130 countries and jurisdictions have enacted privacy laws. Yet, one country has not done so yet: the United States." Brill worries our isolation isn't too splendid. She mused: "In contrast to the role our country has traditionally played on global issues, the US is not leading, or even participating in, the discussion over common privacy norms." That's like Microsoft not participating in the creation of excellent smartphones. It's not too smart. Brill fears other parts of the world will continue to lead in privacy, while the US continues to lead in inaction and chaos. It sounds like the whole company is mad as hell and isn't going to take it anymore. Yet it's not as if Microsoft has truly spent the last 20 years championing privacy much more than most other big tech companies. In common with its west coast brethren, it's been too busy making money.



Quote for the day:

"Leadership is about carrying on when everyone else has given up" -- Gordon Tredgold