Daily Tech Digest - February 05, 2021

Riding out the wave of disruption

Disruption is not necessarily the crisis it’s frequently considered to be for incumbents, the researchers stress. Two technologies can often coexist in the marketplace for a significant period. Thus, it’s important for incumbent companies not to overreact. They should target dual users and reexamine the factors that have led to the old technology sticking around for so long. Of course, the profit implications of cannibalization of the old technology and leapfrogging depend on which type of firm is trumpeting the new technology. New entrants will always stand to gain when they introduce a technology that takes off. But incumbents rolling out a successive technology will also gain if their competitors would have introduced it anyway or if the 2.0 version has a higher profit margin than the original. The authors write, “Leapfroggers are an opportunity loss for incumbents, but switchers are a real loss.” Regardless of the predictive model they use, marketers should strive to understand how the various consumer segments identified in this study will grow or shrink over time and use that information in their forecasts of early sales or market penetration of successive technologies.


AI and APIs: The A+ Answers to Keeping Data Secure and Private

Adding to the complexity is ensuring that AI and data are used ethically, Marques points out. Secure AI comprises two key categories, he says: responsible AI and confidential AI. Responsible AI focuses on regulations, privacy, trust, and ethics related to decision-making using AI and ML models. Confidential AI involves how companies share data with others to address a common business problem. For example, airlines might want to pool data to better understand maintenance, repair, and parts failure issues but avoid exposing proprietary data to the other companies. Without protections in place, others might see confidential data. The same types of issues are common among healthcare companies and financial services firms. Despite the desire to add more data to a pool, there are also deep concerns about how, where, and when the data is used. In fact, complying with regulations is merely a starting point for a more robust and digital-centric data management framework, Jahil explains. Security and privacy must extend into a data ecosystem and out to customers and their PII. For example, the California Consumer Privacy Act (CCPA) has expanded the concept of PII to include any information that may help identify the individual, like hair color or personal preferences.

What is a data center REIT?

The rationale for converting to REIT status will vary from company to company, but broadly it offers beneficial tax status and greater access to capital for growth. “The biggest benefit is that REITs don’t pay any corporate tax,” says Millionacres’ Frankel. “Think of a data center company that isn't a REIT. Its income can effectively be taxed twice: once at the corporate level when the company earns a profit, and again on the individual level when the company pays a dividend to investors.” The rules on whether an organization can apply for REIT classification vary from country to country, but broadly the minimum requirements are a portfolio of properties from which the majority of your revenue is derived through real-estate activities such as rent, and a number of investors to whom you distribute the majority of that revenue. “REITs are able to raise capital more easily via share issuances and/or joint venture partnerships as investors have a better idea of the company’s financial situation once public,” says Cushman & Wakefield’s Imboden. “The degree of difficulty [of becoming a REIT] depends largely on if the company was structured and managed with the intention of becoming a REIT, or if the decision was made after years of operating.”


12 security career-killers (and how to avoid them)

“The biggest problem I’ve seen is security people who think security is the be-all and end-all. They go in with that attitude, and they don’t see how they have to enable the business,” says James Carder, CSO of the security tech company LogRhythm. He says they instead need to collaborate with their business-unit colleagues to understand their objectives and then be an enabler, not a hindrance. Others agree. “Security is a profession that has plenty of standards and regulations and frameworks, but too many times we try to implement them in a blind way, from the perspective of the standards instead of trying to implement them in the context of the business,” adds Russ Kirby, CISO of software company ForgeRock. Similarly, Kirby has seen security pros become so focused on their own objectives that they alienate other departments that may otherwise want to work together to find a solution. He points to one scenario where security staffers wanted to change an application’s minimum password length from 8 characters to upwards of 20. The IT application team pushed back, explaining that they could go to 12 characters but anything more would take significant time and money to change.


Six industries impacted by the combination of 5G and edge computing

"Weather and humidity can impact the performance of 5G,'' Roberts added; he also noted that, as 5G continues to proliferate, there will be many more cell towers. That's consistent with recently released research by PwC, which reported that "the performance of 5G networks remains uneven." Widespread usage is not here yet "because it's a big challenge to upgrade infrastructure," agreed Mark Sami, a director at West Monroe. Right now, for example, to get Verizon's Ultra Wideband network, "you need a line of sight to a tower so you have to be in close proximity," Sami said. ... "It's all about driving applications and how do you make these 5G and edge solutions [work] in a manner where you create more opportunities for the developer community to write applications to that infrastructure architecture,'' said Sid Nag, a vice-president at Gartner. Some 90% of industrial enterprises will use edge computing by 2020, according to Frost & Sullivan. "The applications are endless,'' observed Chris Steffen, a research director at Enterprise Management Associates. "Every vertical is going to be impacted in some way,'' he added, depending on specific use cases and relevance.


Why Disconnected Data Grinds Customer Journeys to a Halt

Business architecture matters because it defines and explains the relationships between customer business processes. And information and application architecture matter because they define the major types of information and the applications that process customer data. Clearly, this kind of systems thinking is essential to defining holistic customer journeys — or in the language of marketing, the friction points between customer-facing systems and the data that flows between them. Thinking this way raises questions like why customers need to interface with applications separately and why they have to enter data multiple times when interacting with these separate applications — two big sources of customer journey friction. Data limits the quality of the customer journey at three major points: a company’s sales, marketing and service processes. According to economist Theodore Levitt, sales and marketing processes should focus on one thing: “the role of marketing is creating and keeping the customer.” To create or obtain new customers, organizations must simplify the processes to become a customer, regardless of the customer channel chosen. In practice this means integrating customer-facing systems, so customers enter information only once.


Rust Could Be the Secret to Next-Gen Computing

The team think there are good prospects for using ‘rust’ to create super-efficient computers. This is because although very simple in architecture, the Fe2O3-based device where merons and bimerons were found already contains all the ingredients to manipulate these tiny bits quickly and efficiently – by flowing a tiny electrical current in an extremely thin metallic ‘overcoat’. In fact, the team state that controlling and observing the movement of merons and bimerons in real time is the goal of a future X-ray microscopy experiment, currently in the planning phase. Moving from basic to applied research means cost and compatibility considerations are of paramount importance. While iron oxide is extremely abundant and cheap, the fabrication techniques employed by researchers in Singapore and Madison are complex and require atomic-scale control. However, the team are optimistic, as they recently demonstrated that it is possible to ‘peel off’ a thin layer of oxide from its growth medium and stick it almost anywhere, with its properties being largely unaffected. They say their next steps will be the design and fabrication of proof-of-principle devices based on ‘cosmic strings’.


New Opportunities from Tech-Driven Industry Convergence

When we study the evolution of information technology, we find that companies traditionally leveraged technology solutions to serve specific business functions within an industry. For example, in life sciences or pharmaceutical companies, technology solutions were usually grouped by function such as commercial, R&D, and supply chain. Most solutions were explicitly designed for the specific process and had little scope for portability across sectors. However, as technologies evolved, solutions have become increasingly broad-based and sector-agnostic. While cloud and high-tech companies still provide industry-specific solutions, there is a convergence in the types of problems they solve for customers across industries. ... As the lines are getting blurred, we need to rethink our traditional approach to grouping various sectors when building technology solutions. For instance, all consumer-facing industries such as CPG, pharma, insurance, and manufacturing are likely to have significant overlap in the challenges they face. Similarly, healthcare, finance, medical devices, retail, and telecommunications are likely to find common ground.


Networking software can ease the complexity of multicloud management

Cloud providers offer essential tools in three key areas: security, networking, and management and orchestration (MANO). Their security capabilities and controls often must be manually implemented, and their networking requires that their on-ramps and off-ramps, which providers optimize, be specifically routed. Each cloud has its own MANO tools providing management, visibility, and automation capabilities that must be configured in order to see and tune application performance. That means a learning curve and fragmented MANO for enterprise IT teams that support multicloud environments. These factors combine to make many IT operations involving IaaS multiclouds difficult to scale and the task of troubleshooting performance slowdowns tedious and time consuming. The leading IaaS providers are building new access capabilities at the edge of their networks. Key to user experience is network performance, which relies on network routing to and from the nearest cloud on-ramp. Leveraging WAN network intelligence is essential to delivering a reliable, high-quality experience between applications in the public cloud and end users. Enterprise IT will require the network intelligence to connect to the best IaaS point of presence to accelerate application delivery.


The transportation sector needs a standards-driven, industry-wide approach to cybersecurity

We have already witnessed attacks on electric vehicle charging stations via the Near-Field Communication (NFC) cards that handle billing for EV charging. The ID cards have inherent vulnerabilities due to third-party providers not securing customer data. Research has shown malicious individuals can copy these cards and use them to charge other vehicles. Another concern is related to traditional lithium-ion batteries, which are used in EVs and have the potential to explode. While this issue is being addressed by battery suppliers with investment in R&D, this safety effort must also consider the risk of cyber attacks. If it’s known that a battery in an EV can explode, this may increase the likelihood that a bad actor will target this type of car with the intent to cause harm. As EV battery technology advances, it’s imperative that comprehensive cybersecurity measures evolve and improve in parallel so automakers and technology providers can prevent this type of hacking from occurring. As the AV industry advances, so will the incentives for hackers. There is an increased potential for financial crimes committed via ransomware attacks. Further, these attacks could cause vehicles to behave abnormally, potentially endangering human lives.



Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan but also believe." -- Anatole France

Daily Tech Digest - February 04, 2021

5 Trends for Industry 4.0: The Factory of the Future

The growing complexity of machine software as well as the ongoing modularization of modern production equipment has led to more simulation upfront. The fact that international travel for commissioning or service has been significantly reduced or in some cases halted these days reinforces this trend. Functional tests of production equipment of the future will be performed using comprehensive models for simulation and virtual commissioning. The factory of the future will be built twice—first virtually, then physically. Digital representations of production machines, continuously fed with live data from the field, will be used for health monitoring throughout the entire lifetime of the equipment and will eventually make onsite missions the exception ... Flexible production in the factory of the future will require robots and autonomous handling systems to adapt faster to changing requirements. While classic programming and teaching of robots isn’t suitable for preparing the system to handle the huge and fast-growing number of different goods, future handling equipment will automatically learn through reinforcement learning and other AI techniques. The prerequisites—massive calculation power and huge amounts of data—have been established over the past years.


Runtime data no longer has to be vulnerable data

With all of these security advantages, you might think that CISOs would have quickly moved to protect their applications and data by implementing secure enclaves. But market adoption has been limited by a number of factors. First, using the secure enclave protection hardware requires a different instruction set, and applications must be re-written and recompiled to work. Each of the different proprietary implementations of enclave-enabling technologies requires its own re-write. In most cases, enterprise IT organizations can’t afford to stop and port their applications, and they certainly can’t afford to port them to four different platforms. In the case of legacy or commercial off-the-shelf software, rewriting applications is not even an option. While secure enclave technologies do a great job protecting memory, they don’t cover storage and network communications – resources upon which most applications depend. Another limiting factor has been the lack of market awareness. Server vendors and cloud providers have quickly embraced the new technology, but most IT organizations still may not know about them. 


Liquid Neural Network: What’s A Worm Got To Do With It?

Liquid networks make the model more robust by improving its resilience to unexpected and noisy data. For instance, they can help algorithms adjust to heavy rains that obscure a self-driving car’s vision. The liquid network also makes the algorithm more interpretable: it can help overcome machine learning's black-box nature because of the expressiveness of its neurons. The liquid network has performed better than other state-of-the-art time series models by a few percentage points in predicting future values in datasets used in atmospheric chemistry and traffic patterns. Apart from the high reliability, it also helped reduce computational costs. The researchers were aiming for fewer but richer nodes in the algorithm. In other words, the study focused on scaling down the network rather than scaling up. “This is a way forward for the future of robot control, natural language processing, video processing — any form of time series data processing,” said Ramin Hasani, the paper’s lead author. ... Tremendous progress has been made in developing smart bots that can perform multiple intelligent tasks, like working alongside humans or giving mental health advice. However, their adoption presents a significant concern in terms of safety and ethics.
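
To make the idea concrete, here is a minimal Python sketch of a liquid time-constant style update, in which each neuron's state follows an ODE whose dynamics depend on the current input. The sizes, weights, and simple Euler solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy sketch of a liquid time-constant (LTC) style neuron update: the state
# follows dx/dt = -x/tau + f(x, I) * (A - x), so the effective time constant
# varies with the input. All sizes and parameters are illustrative.

def ltc_step(x, inputs, W_in, W_rec, tau, A, dt=0.01):
    """One Euler step of the liquid (state- and input-dependent) dynamics."""
    f = np.tanh(W_in @ inputs + W_rec @ x)   # input-dependent nonlinearity
    dxdt = -x / tau + f * (A - x)            # self-limiting liquid dynamics
    return x + dt * dxdt

rng = np.random.default_rng(0)
n_neurons, n_inputs = 8, 3
x = np.zeros(n_neurons)
W_in = rng.normal(size=(n_neurons, n_inputs))
W_rec = rng.normal(size=(n_neurons, n_neurons)) * 0.1
tau = np.ones(n_neurons)    # base time constants
A = np.ones(n_neurons)      # amplitude/bias parameters

for _ in range(100):        # drive the network with a noisy input signal
    x = ltc_step(x, rng.normal(size=n_inputs), W_in, W_rec, tau, A)
print(np.round(x, 3))
```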


Virtual Panel: The MicroProfile Influence on Microservices Frameworks

The term cloud-native is still a large gray area and its concept is still under discussion. If you, for example, read ten articles and books on the subject, all these materials will describe a different concept. However, what these concepts have in common is the same objective: to get the most out of technologies within the cloud computing model. MicroProfile popularized this discussion and created a place for companies and communities to bring successful and unsuccessful cases. In addition, it promotes good practices with APIs, such as MicroProfile Config and the third factor of The Twelve-Factor App. ... The use of reflection by the frameworks has its trade-offs. For example, at application start, and with a cost in memory consumption, the framework usually invokes the inner class ReflectionData within Class.java. It is instantiated as a SoftReference, which takes a certain time to leave memory. So, I feel that in the future, some frameworks will generate metadata with reflection and other frameworks will generate this type of information at compile time, using the Annotation Processing API or similar. We can see this kind of evolution already happening in CDI Lite, for example.


General Availability of the new PnP Framework library for automating SharePoint Online operations

Over time the classic PnP Sites Core has grown into a hard-to-maintain code base, which made us decide to start a major upgrade effort for all PnP .NET components. As a result, PnP Framework is a slimmed-down version of PnP Sites Core, dropping legacy pieces and dropping support for on-premises SharePoint in favor of improved quality and maintainability. If you’re still using PnP Sites Core with your on-premises SharePoint, then that’s perfectly fine; we’re not going to pull these components, but you’ll not see any updated versions going forward. PnP Framework is a first milestone in the upgrade of the PnP .NET components. In parallel, we’re building a brand new PnP Core SDK using modern .NET development techniques focused on performance and quality (check our test coverage and documentation). Over time we’ll implement more and more of the PnP Framework functionality in PnP Core SDK and then replace the internal implementation in PnP Framework. The modern pages API is a good example: when you use that API in PnP Framework, you’re actually using the implementation done in PnP Core SDK.


Endpoint Detection and Response: How Hackers Have Evolved

While kernel mode is the most elevated type of access, it does come with several drawbacks that complicate EDR effectiveness. In kernel mode, visibility can be quite limited, as there are several data points only available in user mode. Also, third-party kernel-based drivers are often difficult to develop and, if not properly vetted, can lead to higher chances of system instability. The kernel is often regarded as the most fragile part of a system, and any panics or errors in kernel-mode code can cause huge problems, even crashing the system entirely. User mode is often more appealing to attackers as it has no way of directly accessing the underlying hardware. Code that runs in user mode must use API functions that interact with the hardware on behalf of the application, allowing for more stability and fewer system-wide crashes (as application crashes will not affect the system). As a result, applications that run in user mode need minimal privileges and are more stable. Suffice it to say, a lot of EDR products rely heavily on user-mode hooks over kernel-mode ones, making things interesting for attackers. Since the hooks exist in user mode and hook into our processes, we have control over them. And since applications run within the user’s context, everything that's loaded into our process can be manipulated by the user in some form or another.
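
As a loose Python analogy, a user-mode hook behaves like monkey-patching inside your own process: the monitoring wrapper lives in memory the process controls, so code running in that process can restore the original and go quiet. Real EDR hooks patch native API prologues; the names below are invented purely for illustration.

```python
import os

# 'EDR' installs a user-mode hook: the original function is wrapped so
# telemetry is emitted before every call.
original_remove = os.remove

def hooked_remove(path):
    print(f"[edr] delete observed: {path}")   # telemetry before the real call
    return original_remove(path)

os.remove = hooked_remove          # hook installed inside our own process

open("tmp.txt", "w").close()
os.remove("tmp.txt")               # this delete is logged by the hook

# Because the hook lives in memory we control, 'unhooking' is trivial:
os.remove = original_remove        # attacker restores the original reference
open("tmp.txt", "w").close()
os.remove("tmp.txt")               # this delete is silent again
```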


Continuous Delivery: Why You Need It and How to Get Started

For decades, enterprise software providers have focused on delivering large quarterly releases. "This system is slow because if there are any bugs in such a large release, developers have to sift through the deployed update in its entirety to find the problem to patch," said Eric Johnson, executive vice president of engineering for open-source code collaboration platform provider GitLab. Enterprises committed to CD rapidly deliver a string of highly granular releases. "This way, if there are any bugs in a new individual release they’re easily and swiftly addressed by developers' teams." Most developers appreciate CD because it helps them deliver higher-quality work while limiting the risk of introducing unwanted change into production environments. CD ensures that the entire software delivery lifecycle, from source control, to building and testing, to artifact release, and ultimately deployment into real environments, is automated and consistent, explained Brent Austin, director of engineering at Liberty Mutual Insurance. High levels of test automation are critical in CD, allowing developers to introduce changes quickly with high confidence and higher quality. "CD also helps developers think in small batch sizes, which allows for easier and more effective rollback scenarios when issues are found and makes introducing change safer," Austin said.


Interview With a Russian Cybercriminal

Interacting with a ransomware operator is "unusual, but not that unusual," says Craig Williams, director of outreach for Cisco Talos. Of course, a key challenge in chatting with a criminal is knowing when to trust them. Researchers asked many questions whose answers they were able to verify, but there were scenarios in which they felt Aleks wasn't telling the whole story. Williams says the strongest example of this related to targeting the healthcare industry. "He pointed out how he didn't target healthcare customers … but then knew an awful lot about when healthcare paid, and in what situations they paid, and what type of data they have, and exactly how valuable it would be, and if they had insurance, they were more likely to pay," he explains. For example, Aleks reportedly told researchers hospitals pay 80% to 90% of the time. Aleks seems to choose victims based on their ability to pay quickly, Williams says, though the report notes the attacker's views may not represent those of the LockBit group. For example, Aleks says the EU's General Data Protection Regulation (GDPR) may work in adversaries' favor. Victim companies are more likely to pay "quickly and quietly" so as to avoid penalties under GDPR.


The most important skills for successful AI deployments

As AI has bolstered the operations of more and more sectors, it’s become apparent that knowledge of the technology alone isn’t enough for deployments to succeed. Whether the AI solution is serving companies or individuals, the engineers behind the roll-out need to understand the business at hand. “The company needs people who know the principles of how these algorithms work, and how to train the machine, but can also understand the business domain and sector,” said Sanz-Saiz. Without this understanding, training an algorithm can be more complex. “Any successful data scientist not only needs to bring technical expertise, but also needs to have domain and sector expertise as well.” Without sufficient industry knowledge, decision-making can become inaccurate, and in some cases, such as healthcare, it can also be dangerous. Companies such as Kheiron Medical have been using an AI solution to transform cancer screening, accelerating the process and minimising human error. For this to be effective, careful assessments and evaluations at every stage of the screening procedure need to be in place. “I think a commitment to clinical rigour needs to underpin everything that we do,” explained Sarah Kerruish, chief strategy officer at Kheiron.


Google’s New Approach To AutoML And Why It’s Gaining Traction

AutoML is an automated process of searching for a child program from a search space to maximise a reward. The researchers broke down the process into a sequence of symbolic operations. That is, a child program is turned into a symbolic child program. The symbolic program is further hyperified into a search space by replacing some of the fixed parts with to-be-determined specifications. During the search, the search space materialises into different child programs based on search algorithm decisions. It can also be rewritten into a super-program to apply complex search algorithms such as efficient NAS (ENAS). PyGlove is a general symbolic programming library for Python. Using this library, Python classes, as well as functions, can be made mutable through brief Python annotations, making it easier to write AutoML programs. The library also allows AutoML techniques to be quickly dropped into preexisting machine learning pipelines while benefiting open-ended research that requires extreme flexibility. PyGlove implements various popular search algorithms, such as PPO, Regularised Evolution and Random Search.
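
The symbolic idea can be illustrated with a toy sketch in plain Python (this is not the PyGlove API): fixed parts of a child program stay as values, to-be-determined parts become specs, and a search algorithm's decisions materialize concrete child programs.

```python
import random

# Minimal illustration of the symbolic-programming concept: a child program
# is "hyperified" into a search space by replacing some fixed parts with
# to-be-determined specs, and each search-algorithm decision materializes a
# concrete child program. Names and candidates below are invented examples.

class OneOf:
    """A to-be-determined choice among candidate values."""
    def __init__(self, candidates):
        self.candidates = candidates

# Symbolic child program: a model spec with two hyperified fields.
search_space = {
    "num_layers": OneOf([2, 4, 8]),
    "activation": OneOf(["relu", "tanh", "swish"]),
    "learning_rate": 1e-3,            # fixed part, not searched
}

def materialize(space, decide=random.choice):
    """Resolve every spec into a concrete value via the algorithm's decisions."""
    return {k: decide(v.candidates) if isinstance(v, OneOf) else v
            for k, v in space.items()}

for _ in range(3):
    child = materialize(search_space)   # one concrete child program per trial
    print(child)                        # train/evaluate it, report the reward
```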



Quote for the day:

"If you can't handle others' disapproval, then leadership isn't for you." -- Miles Anthony Smith

Daily Tech Digest - February 03, 2021

Usability Testing: the Ultimate Guide [Free Checklist]

Generally speaking, usability testing comes in two types: moderated and unmoderated. Moderated sessions are guided by a researcher or a designer, while the unmoderated ones rely on users’ own unassisted efforts. Moderated tests are an excellent choice if you want to observe users interact with prototypes in real time. This approach is more goal-oriented — it lets you confirm or disconfirm existing hypotheses with more confidence. On the other hand, unmoderated usability tests are convenient when working with a substantial pool of subjects. A large number of participants allows you to identify a broader spectrum of issues and points of view. However, it’s important to underline that testing isn’t that black and white. It’s best to look at this practice as a spectrum between moderated and unmoderated testing. Sometimes, during unmoderated sessions, we like to nudge our subjects in the right direction through mild moderation when necessary. Testing our prototypes can provide us with a wide array of insights. Fundamentally, it helps us spot flaws in our designs and identify potential solutions to the issues we’ve uncovered. We learn about the parts of our product that confuse or frustrate our users. By disregarding this step, we’re opening up to the possibility of releasing a product that causes too much friction.


Linux malware backdoors supercomputers

ESET researchers have reverse engineered this small yet complex malware, which is portable to many operating systems including Linux, BSD, Solaris, and possibly AIX and Windows. “We have named this malware Kobalos for its tiny code size and many tricks; in Greek mythology, a kobalos is a small, mischievous creature,” explains Marc-Etienne Léveillé, who investigated the malware. “It has to be said that this level of sophistication is only rarely seen in Linux malware.” Kobalos is a backdoor containing broad commands that don’t reveal the intent of the attackers. It grants remote access to the file system, provides the ability to spawn terminal sessions, and allows proxying connections to other Kobalos-infected servers, Léveillé notes. Any server compromised by Kobalos can be turned into a Command & Control (C&C) server by the operators sending a single command. As the C&C server IP addresses and ports are hardcoded into the executable, the operators can then generate new Kobalos samples that use this new C&C server. In addition, in most systems compromised by Kobalos, the client for secure communication (SSH) is compromised to steal credentials.


Disrupting the patent ecosystem with blockchain and AI

Applying the power of AI and blockchain to IP assets enables a paradigm shift in how IP is understood and managed. Companies that understand and adopt this new paradigm will be rewarded. Last year, we announced the inclusion of IPwe, the world’s first AI- and blockchain-powered patent platform, among our selection of the next wave of enterprise blockchain business networks. The Paris-based start-up has since deployed a suite of leading-edge IP solutions, removing barriers by addressing fundamental issues within today’s patent ecosystem. IPwe is partnering with IBM to accelerate its mission to address the inefficiencies in the patent marketplace. IBM Cloud and IBM Blockchain teams are working closely with IPwe on a multi-year project to assist IPwe in its mission to deliver world-class solutions to its enterprise, SME, university, law firm, research institution and government customers, with a heavy emphasis on meeting the needs of financial, technology and risk management executives. In addition to giving patent owners tools that provide greater visibility, effective management, and ease of conducting transactions with patents, the IPwe Platform reduces costs for innovators, and creates commercial opportunities for those that wish to partner or engage in financial transactions.


Low-Code Platforms and the Rise of the Community Developer: Lots of Solutions, or Lots of Problems?

Most community developers will progress through three stages as they become more capable of using the low-code platform. Many community developers won’t progress beyond the first or second stage, but some will go on to the third stage and build full-featured applications used throughout your business. Stage 1—UI Generation: Initially they will create applications with nice user interfaces with data that is keyed into the application. For example, they may make a meeting notes application that allows users to jointly add meeting notes as a meeting progresses. This is the UI Generation stage. Stage 2—Integration: As users gain experience, they’ll move to the second stage where they start pulling in data from external systems and data sources. For example, they’ll enhance their meeting notes application to pull calendar information from Outlook and email attendees after each meeting with a copy of the notes. This is the Integration stage. Stage 3—Transformation: And, finally, they’ll start creating applications that perform increasingly sophisticated transformations. For example, they may run the meeting notes through a machine learning model to tag and store the meeting content so that it can be searched by topic. This is the Transformation stage.

XOps: Real or Hype?

Like DevOps, the various types of Ops aim to accelerate processes and improve the quality of what they're delivering: software (DevOps); data (DataOps); AI models (MLOps); and analytics insights (AIOps). Some consider the different Ops types important since the expertise required for each type differs. Others believe it's just hype, a relabeling of what already exists, or worry that the fragmentation created by all the different groups may create extra bureaucracy that frustrates faster value delivery. Agile software development practices have been bubbling up to the business for some time. Since the dawn of the millennium, business leaders have been told their companies need to be more agile just to stay competitive. Meanwhile, many agile software development teams have adopted DevOps, and increasingly they've gone a step further by embracing continuous integration/continuous delivery (CI/CD), which automates additional tasks to enable an end-to-end pipeline that provides visibility throughout and smoother process flows than the traditional waterfall handoffs. Like DevOps, DataOps, MLOps, and AIOps are cross-functional endeavors focused on continuous improvement, efficiency and process improvement.


Sigma Rules to Live Your Best SOC Life

In the Security Operations space, we have been using SIEMs for many years with varying degrees of deployment, customization, and effectiveness. For the most part, they have been a helpful tool for Security Operations. But they can be better. Like any tool, they need to be sharpened and used correctly. After a while, even a sharpened tool can become dull from too much use, and with a SIEM, that dullness takes the form of too many events creating the dreaded ALERT FATIGUE!!! This is real for security operations and must be addressed, because the more alerts there are, the more an engineer must work through, and the more they will miss. Insert Sigma rules for SIEMs (pun intended): a way for Security Operations to implement standardization into the daily tasks of building SIEM queries, managing logs, and threat-hunting correlations. What is a Sigma rule, you may ask? A Sigma rule is a generic and open, YAML-based signature format that enables a security operations team to describe relevant log events in a flexible and standardized format. So, what does that mean for security operations? Standardization and collaboration are now more possible than ever before with the adoption of Sigma rules throughout the Security Operations community.
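
For illustration, here is a minimal, hypothetical Sigma rule in that YAML signature format, embedded in a short Python snippet that parses it with PyYAML. The field names and values are made up for the example, not taken from a real rule set.

```python
import yaml  # PyYAML

# A hypothetical Sigma rule: a logsource describing where events come from,
# a detection block naming the fields to match, and a condition tying the
# selections together.
SIGMA_RULE = """
title: Suspicious Use of whoami
status: experimental
description: Detects execution of whoami, often seen after initial compromise
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\\whoami.exe'
  condition: selection
falsepositives:
  - Administrator activity
level: medium
"""

rule = yaml.safe_load(SIGMA_RULE)
print(rule["title"], "->", rule["detection"]["condition"])
```

Because the format is backend-agnostic, the same rule can be converted into queries for different SIEMs, which is where the standardization benefit comes from.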


How AI Is Radically Changing Cancer Prediction & Diagnosis

Risk modelling includes assessing risks at different time points, which can determine the preventive measures that need to be taken at different stages. Predicting the risk at each time point in isolation, however, rather than in relation to the others, is not useful. Hence, scientists trained Mirai with an ‘additive hazard layer’. This layer predicts a patient’s risk at a time point, let’s say four years, as an extension of the risk at a previous time point, say three years, instead of treating the two time points independently. This helps the model learn to make self-consistent risk assessments even with variable amounts of follow-up as input. Secondly, the model includes non-image risk factors like age and hormonal variables but does not necessarily require them at test time, since a trained network can extract this information from mammograms. Hence, this model can be adopted globally. Lastly, standard models do not generalize even across minor variations, such as a change in the mammography machine used. Mirai used an ‘adversarial’ scheme to de-bias such models, learning mammogram representations agnostic to the source clinical environment.
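
A rough numeric sketch of the additive-hazard idea: if the model emits a baseline plus a non-negative hazard per follow-up year, the cumulative sum guarantees that the four-year risk extends the three-year risk. The shapes and the softplus/survival transform below are assumptions for illustration, not Mirai's exact code.

```python
import numpy as np

# The risk at year t is the baseline plus the cumulative sum of non-negative
# per-year hazards through year t, which makes predictions self-consistent:
# risk can only grow with the horizon.

def softplus(z):
    return np.log1p(np.exp(z))            # smooth map to non-negative values

def additive_hazard_risk(logits):
    """logits: (batch, 1 + T) raw outputs -> (batch, T) monotone risks."""
    base = softplus(logits[:, :1])        # baseline hazard
    hazards = softplus(logits[:, 1:])     # per-year non-negative increments
    cumulative = base + np.cumsum(hazards, axis=1)
    return 1.0 - np.exp(-cumulative)      # squash to a probability-like risk

logits = np.random.randn(2, 6)            # e.g., baseline + 5 follow-up years
risk = additive_hazard_risk(logits)
assert (np.diff(risk, axis=1) >= 0).all() # 4-year risk >= 3-year risk, etc.
print(np.round(risk, 3))
```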


How To Port Your Web App To Microsoft Teams

While there are many different paths to building and deploying Teams apps, one of the easiest is to integrate your existing web apps with Teams through what is called “tabs.” Tabs are basically embedded web apps created using HTML, TypeScript (or JavaScript), client-side frameworks such as React, or any server-side framework such as .NET. Tabs allow you to surface content in your app by essentially embedding a web page in Teams using an <iframe>. The application was specifically designed with this capability in mind, so you can integrate existing web apps to create custom experiences for yourself, your team, and your app users. One useful aspect of integrating your web apps with Teams is that you can pretty much use the developer tools you’re likely already familiar with: Git, Node.js, npm, and Visual Studio Code. To expand your apps with additional capabilities, you can use specialized tools such as the Teams Yeoman generator command line tool or the Teams Toolkit Visual Studio Code extension and the Microsoft Teams JavaScript client SDK. They allow you to retrieve additional information and enhance the content you display in your Teams tab.


How AI Can Read Your Brain Waves

The music study is only one of many recent efforts to understand what people are thinking using computers. The research could lead to technology that one day would help people with disabilities manipulate objects using their minds. For example, Elon Musk’s Neuralink project aims to produce a neural implant that allows you to carry a computer wherever you go. Tiny threads are inserted into areas of the brain that control movement. Each thread contains many electrodes and is connected to an implanted computer. "The initial goal of our technology will be to help people with paralysis to regain independence through the control of computers and mobile devices," according to the project’s website. "Our devices are designed to give people the ability to communicate more easily via text or speech synthesis, to follow their curiosity on the web, or to express their creativity through photography, art, or writing apps." Brain-machine interfaces might even one day help make video games more realistic. Gabe Newell, the co-founder and president of video game giant Valve, said recently that his company is trying to connect human brains to computers. The company is working to develop open-source brain-computer interface software, he said. 


Q&A: Dataiku VP discusses AI deployment in financial services

AI is also a real revolution within risk assessment, notably through the enhanced use of alternative data. This is true both for traditional risks and emerging risks such as climate change, helping all financial players — banks and insurers alike — to reconsider how they price risks. Those who have developed a strong expertise in leveraging alternative data and agile modeling have been able to truly benefit from their investment during the ongoing health crisis, which has deeply challenged traditional models. Lastly, the positive impact of AI on customers should not be underestimated. Financial services are confronted with an aggressive competitive landscape as well as demand from customers for improved personalisation, driving improved customer orientation in these organisations. The capacity to build 360° customer views and optimise customer journeys, notably on claims management, are two examples of areas where AI has significantly supported deep transformation within banks and insurance companies, with yet much more to be delivered.



Quote for the day:

"Leadership is a potent combination of strategy and character. But if you must be without one, be without the strategy." -- Norman Schwarzkopf

Daily Tech Digest - February 02, 2021

The Chaos Mindset: Teaching Your Code to Cope

Like Agile, chaos engineering is more than a set of activities and workflows—it’s also a state of mind. Your people and your culture must be ready and able to adopt chaos principles, as well as chaos processes. For the DevOps leader, adopting a new mindset might sound a little, well, vague. But this shift is based on concrete actions, not just philosophical musings. Consider an example from the world of cloud infrastructure: a mission-critical application that is hosted within a cloud service could be at risk of failure if, say, that cloud service is centralized in a single location, or within a limited number of microservices within the cloud infrastructure. But if the app is hosted in a distributed way, you can create greater opportunity for application-level availability and resilience, and you can test for that resilience within the existing production environment. This kind of distributed architecture isn’t brand-new for most enterprises, and, therefore, the process of developing applications in a way that tests for availability in a variety of infrastructure scenarios also shouldn’t be a foreign concept. As a DevOps leader, you can build a culture of resilience-centric thinking by empowering your teams with the tools they need to adopt chaos-style testing, and then showing them how to build that thinking into every sprint and every standup.
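
A toy Python sketch of chaos-style testing under invented assumptions (made-up service names, a 30% injected failure rate): faults are injected into a dependency while a test asserts that the caller's fallback path keeps requests succeeding.

```python
import random

# Inject failures into a dependency, then verify the caller stays available.

def flaky(call, failure_rate=0.3, rng=random.Random(42)):
    """Wrap a dependency so it fails randomly, like an unreachable replica."""
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault: replica unavailable")
        return call(*args, **kwargs)
    return wrapped

def fetch_profile(user_id):
    return {"user": user_id, "source": "primary"}

def resilient_fetch(user_id, primary, replicas):
    for candidate in [primary] + replicas:       # try primary, then fall back
        try:
            return candidate(user_id)
        except ConnectionError:
            continue
    return {"user": user_id, "source": "cached"} # last-resort degraded mode

chaotic = flaky(fetch_profile)
results = [resilient_fetch(7, chaotic, [fetch_profile]) for _ in range(100)]
assert all(r["user"] == 7 for r in results)      # available despite faults
```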


Intel Outside: How The Chip Giant Lost Its Edge

For Intel, the year 2020 was a roller coaster ride. The company saw more lows than highs. If Apple delivered the much-dreaded news to the company, its rivals NVIDIA and AMD chipped in with more bad news with mega acquisitions and advancements in technology. Intel’s woes didn’t end there. Last year, rockstar chip architect Jim Keller, who was hired to put Intel on top again, resigned after a brief stint at the company; it was Keller’s shortest tenure compared to his time at Apple and Tesla. Then there was Chief Engineer Venkata Murthy Renduchintala, who promised in 2019 that Intel’s next-gen 7nm chips were on track to start production in 2021. That didn’t happen. Intel parted ways with Renduchintala as part of a technical team shake-up. Constant engineering hiccups and internal debates over whether Intel needs to outsource manufacturing further delayed the arrival of next-gen CPUs. The top brass of the company moving in and out also signals Intel’s leadership vulnerabilities. Current chief Bob Swan, who will be replaced soon, was himself appointed only a couple of years ago. Swan was tasked with restructuring the company to adjust to disruptive technologies like AI and cloud.


North Korea-Sponsored Hackers Attack with Bad-Code Visual Studio Projects

Microsoft reported a battle with North Korean-sponsored hackers who attacked security researchers with a most innovative technique: compromised Visual Studio projects. The attack was attributed to a group called ZINC, said to be associated with the Democratic People's Republic of Korea (DPRK). A Jan. 28 post titled "ZINC attacks against security researchers" described the organization as a DPRK-affiliated and state-sponsored group. That determination was based on "observed tradecraft, infrastructure, malware patterns, and account affiliations." "This ongoing campaign was reported by Google’s Threat Analysis Group (TAG) earlier this week, capturing the browser-facing impact of this attack," Microsoft said. "By sharing additional details of the attack, we hope to raise awareness in the cybersecurity community about additional techniques used in this campaign and serve as a reminder to security professionals that they are high-value targets for attackers." While such battles between hackers and enterprises and security organizations are obviously common and ongoing, one unusual aspect of this encounter was the choice of payloads for the bad code.


AI Ethics Really Come Down To Security

Innovating trustworthy AI/ML depends on the design, development and distribution of AI systems that learn from and work collaboratively with humans in a comprehensive and meaningful fashion. It's critical for security and privacy to be considered at the start of any new technology's architecture. They cannot be properly included as an afterthought; the absolute highest required level of security and protection of data must be incorporated in both hardware and software, which will ensure that it is already configured into all steps of the development and supply chain — beginning with design all the way through to the technology's business and utilization model. The Charter of Trust initiative for IoT cybersecurity (of which we're a partner) has also provided excellent guidelines for a risk-based methodology and verification that should be incorporated as core requirements throughout that supply chain. After we identify the core principles that will govern AI development, we must then determine how to ensure these ethical AI systems are not compromised. Machine learning can monitor data and pinpoint anomalies, but it unfortunately also can be used by hackers to increase the impact of their actual cyberattacks.


Use social design to help your distributed team self-organize

For those on the front lines, a restructuring can feel more like something done to them than with them. Managers might overlook the experience and insights of those expected to innovate, collaborate, and satisfy customers within the new structure. And there is often an explicit or implicit power dynamic that distorts functional considerations as executives jostle for control of prominence and resources. An alternative to the top-down approach is to let function drive form, supporting those most directly connected to creating value for customers. Think of it as bottom-up or outside-in. One discipline useful in such efforts is social design, a subspecialty of design that aspires to solve complex human issues by supporting, facilitating, and empowering cultures and communities. Its practitioners design systems, not simply beautiful things. I spoke with one of the pioneers in this area, Cheryl Heller, author of The Intergalactic Design Guide: Harnessing the Creative Potential of Social Design. Her current work at Arizona State University centers on integrating design thinking and practice into functions that don't typically utilize design principles. “People’s work is often their only source of stability right now,” she told me. “You have to be careful, because people are brittle.” 


How-to improve Wi-Fi roaming

The initial tendency may be to install more APs in hopes of finding an easy fix, but doing so without careful analysis can make the situation even worse. Proper roaming requires more than just good signal strength throughout coverage areas; it takes a careful balance between the coverage of each AP on both the 2.4 and 5GHz bands to make roaming work right. ... Getting the coverage overlap just right between all the APs in your network is one of the most important things you can do to help improve roaming. At the same time, it is one of the toughest. You have to check the coverage throughout the coverage areas and analyze the overlapping. If issues are found, you need to figure out how to address them, perform the fix, and then double-check that it’s actually fixed. Keep in mind you want about a 15% to 20% coverage overlap between AP cells, using -67dBm as the signal boundary for each cell. You want to look at both bands, too, keeping in mind 2.4GHz naturally provides longer range than 5GHz. Less overlap can result in spots with bad signals. If you have too much overlap between AP cells in either band, it can cause co-channel interference and “sticky” clients that don’t roam, which can result in APs that become overloaded with clients.
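
As a back-of-the-envelope illustration of that 15% to 20% target, the Python sketch below estimates cell edges from a simple log-distance path-loss model and flags AP spacings whose one-dimensional overlap falls outside the band. Real surveys measure signal on site; the transmit power and path-loss exponent here are assumptions.

```python
import math

EDGE_DBM = -67.0   # cell boundary signal level used in the article

def rssi(distance_m, tx_dbm=-30.0, n=3.0):
    """Log-distance path loss: RSSI at 1 m is tx_dbm, decaying 10n dB/decade."""
    return tx_dbm - 10 * n * math.log10(max(distance_m, 1.0))

def cell_radius(tx_dbm=-30.0, n=3.0):
    """Distance at which RSSI drops to the -67 dBm boundary."""
    return 10 ** ((tx_dbm - EDGE_DBM) / (10 * n))

r = cell_radius()
for ap_spacing in (1.2 * r, 1.7 * r, 2.2 * r):
    # Crude 1-D overlap share between two adjacent cells of radius r.
    overlap = max(0.0, 2 * r - ap_spacing) / (2 * r)
    status = "ok" if 0.15 <= overlap <= 0.20 else "re-tune"
    print(f"spacing {ap_spacing:5.1f} m -> overlap {overlap:4.0%} {status}")
```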


UK's leading AI startup and scaleup founders highlight the main pain points of running an AI business

Looking specifically at financial institutions, Hodgson says that they must ensure that their data foundations are fit for purpose. “Data is the raw material of our industry, and without it, the benefits and potential of AI are stunted and capped before the system even gets switched on. Many financial institutions already sit atop mountains of their own data in addition to buying more from vendors — yet they do not have the time, the resources or the staff expertise to sift through it,” Hodgson explains. Dr Richard Ahlfeld, founder and CEO at Monolith AI — a startup that builds new machine learning software to help engineers to improve the product development process, echoes this view. He says: “Any pain points tend to boil down to the data: getting the data, ensuring data security, making sure that you can trust the data. “There’s no standardisation of what makes data ‘valuable’ across the industry either, and not all engineers follow the same protocols and practices. For example, deciding what data to keep can be tricky as it’s hard to anticipate what might or might not be useful to have in the future. Even saving data from failed ventures (a practice which is often overlooked) can have its value, as it acts as a reference for future experiments.”


Ransomware payments are going down as more victims decide not to pay up

While it's positive that a higher percentage of these victims are choosing not to pay cyber criminals, there's still a large number of organisations that do give in – allowing ransomware to continue to be successful, even if those behind attacks have been making slightly less money. However, it might be enough for some ransomware operators to consider if the effort is worth it. "When fewer companies pay, regardless of the reason, it causes a long-term impact that, compounded over time, can make a material difference in the volume of attacks," said a blog post by Coveware. The rise in organisations choosing not to give in to extortion tactics around ransomware has also led the gangs to change their tactics, as shown by the increase in ransomware attacks where criminals threaten to leak stolen data if the victim doesn't pay. According to Coveware, these accounted for 70% of ransomware attacks in the final three months of 2020 – up from 50% during the previous three months. However, while almost three-quarters of organisations threatened with data being published between July and September paid ransoms, that dropped to 60% for organisations who fell victim between October and December.


Measuring Crop Health Using Deep Learning – Notes From Tiger Analytics

Agrochemical companies are already experimenting with advanced data science techniques to overcome these challenges: they employ drones to capture high-resolution aerial images of the farms and apply computer vision techniques and other complex algorithms to process the images. However, challenges persist; leaf characteristics such as orientation, alignment, length, shape and twists are difficult to discern when viewed from above, particularly in crops that grow tall and narrow, such as maize. Further complexities are introduced by variability in ambient light conditions, soil terrain, cloud refraction, occlusion and other environmental factors. Finally, all these factors vary over time, which means that to get a clear picture of plant health and treatment performance, regular measurement is required. As the deep learning and computer vision fields mature, scientists are beginning to use these technologies for such leaf area index (LAI) measurements, and more. Tiger Analytics has collaborated with leading agrochemical companies to develop such solutions. In this article, we outline the possible approaches and challenges. The primary challenge in developing a deep learning solution is the near-nonexistence of training data.
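
As a baseline for what such models learn to produce, here is a crude vegetation-cover estimate in Python using the classic Excess Green index on a synthetic image. The threshold and colors are illustrative assumptions, and a trained segmentation network would replace this heuristic in practice.

```python
import numpy as np

# Pre-deep-learning heuristic for canopy analysis: the share of vegetation
# pixels via the Excess Green index (ExG = 2g - r - b) on chromatic
# coordinates. The thresholded mask resembles the kind of label a trained
# segmentation model would predict far more robustly.

def vegetation_fraction(rgb, threshold=0.05):
    """rgb: (H, W, 3) float array in [0, 1] -> share of vegetation pixels."""
    total = rgb.sum(axis=-1) + 1e-8
    r, g, b = (rgb[..., i] / total for i in range(3))   # chromatic coords
    exg = 2 * g - r - b                                 # Excess Green index
    return float((exg > threshold).mean())

# Synthetic 'field': soil-colored pixels with a band of green canopy rows.
img = np.zeros((100, 100, 3))
img[:] = [0.4, 0.3, 0.2]          # soil
img[30:70, :] = [0.2, 0.5, 0.2]   # canopy rows
print(f"vegetation cover ~ {vegetation_fraction(img):.0%}")
```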


Contemporising Data Protection Legislation

Provisioning blanket exemption to government agencies from the application of the data protection law and processing obligations (Section 35, PDP Bill) poses a challenge to reforming and upgrading the data access and surveillance regime. The importance of procedural safeguards, the right to effective recourse, and necessary and proportionate access principles has been reiterated by numerous Supreme Court judgments like PUCL v. Union of India and K.S. Puttaswamy v. Union of India. Such an exemption might inadvertently curtail the government’s stated vision of becoming the data processing and analytics hub of the world, and dent digital economy goals. According to the updated draft of the Standard Contractual Clauses (SCCs) by the European Commission on personal data transfers outside the European region, data exporters must take into account the laws and overall regime that enable public authorities to access personal data through binding requests in the destination country, and gauge if they meet “necessary and proportionate” requirements expected from a “democratic society”. If governments and businesses find the exemption under Section 35 of the PDP Bill excessive, digital trade and investments, and the ability to forge agreements, might be impacted.



Quote for the day:

"Trust is one of the greatest gifts that can be given and we should take creat care not to abuse it." --Gordon Tredgold

Daily Tech Digest - February 01, 2021

Welcome to the client-serverless revolution

As this trend intensifies, a new paradigm of connected internet applications has come to the forefront. This approach is known as client-serverless computing. It delivers consistently dynamic, interactive application experiences from any smartphone or edge device, no matter where a user happens to be, or where the resources they’re accessing are being served from. The widespread adoption of rich-client devices and the global availability of distributed cloud services have fueled the client-serverless computing trend even more, but it also demands more from developers. No longer can developers assume that their program code will primarily access databases, app servers, and web servers that are located within a single data center or cloud region. Instead, developers must build server-side business logic and markup, as well as the client-side JavaScript that will render the user interface on myriad client devices. They must code applications that are optimized for high-quality, browser-side interactivity over industry-standard interfaces such as REST (for remote APIs) or JSON (for data formats). Client-serverless has roots in the old-guard, three-tier application architectures that sprang up around PCs and local area networks, connecting a client-side GUI to a back-end SQL database.
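
The server half of that pairing can be as small as a stateless handler returning JSON for whichever rich client renders it. The sketch below, with an invented route and payload, uses only Python's standard library.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal stateless REST endpoint: server-side business logic returns JSON,
# and any browser- or device-side client renders the interactive experience.
class Api(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/greeting":                 # hypothetical route
            body = json.dumps({"message": "hello from the nearest region"})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), Api).serve_forever()
```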


Strengthening Zero-Trust Architecture

First, it's helpful to consider zero trust in terms of the need for controlled access management that does not negatively affect the business. Specifically, organizations must establish a zero-trust environment that limits access to individuals with the proper authority but doesn't interfere with daily operations. One way to accomplish this is through a data-trust lens. Rather than granting blanket access to validated users, organizations should hide specific files and data from those who don't have the authorization to access them, strengthening data protection beyond user-level permissions without impacting authorized users. By hiding objects like files, folders, or mapped network and cloud shares, attackers cannot find or access the data they seek. This function can serve as a powerful defense against data theft and ransomware attacks. Application trust likewise takes security beyond user privileges. Merely focusing on whether a query is authorized isn't enough — it's also vital to consider the application invoking that query. Doing so can prevent unauthorized access from applications such as the Windows command line or PowerShell, which regular users wouldn't typically use to access data. Application trust can also help identify and deflect attackers attempting to probe open ports and services in order to compromise them.


How can tech leaders take people with them on their digital transformation journey?

Leaders need to make it personal for their employees, making it clear that by introducing this new digital tool their lives will become easier and their work more efficient. Leaders can look to do this by winning hearts and minds through demonstrations and simple, clear communication. If, for example, a business is introducing a new collaborative tool, it needs to make it clear how that will benefit employees. Will it reduce email traffic? Make instant communication more effective? Or free up more time in their day to focus on other priorities? Demonstrating these benefits will help to put people in the right mind-set from the start. It’s also important to ask for instant feedback on transformational change programmes. Ensuring people are involved from the start will promote engagement throughout the process and help leaders to understand how their employees feel about the change and its impacts within their teams. It is also vital to identify champions and advocates: digital change champions are nothing new but are critical to supporting the roll-out of digital transformation at the frontline of a business. These people can answer frequently asked questions, provide an additional avenue of communication to leaders and encourage employees to make best use of the new tools being made available to them.


AI No Silver Bullet for Cloud Security, but Here’s How It Can Help

One of the most promising – and certainly most developed – uses of AI in cybersecurity is to use AI systems to trawl through historical data in order to identify attack patterns. Some AI algorithms are very effective at this task, and can inform otherwise oblivious cybersecurity teams that they have, in fact, been hacked many times. The primary value of this kind of system is seen when it comes to managing employee access to systems and files. AI systems are extremely good at tracking what individual users are doing and at comparing this with what they do typically. This allows administrators (or automated security systems, explored below) to easily identify unusual activity and block users’ access to files or systems before any real damage is done. This kind of functionality is now widespread in many industries. Some cloud providers even ship it with their basic cloud storage systems. In many cases, in fact, an organization is not even aware that an AI is collecting data on the way they use their cloud service in order to scan this for unusual activity. This type of tool, however, also represents the limit of what AI can do, in terms of cloud security, at the moment. Most organizations lack the tools to use AI systems in a more complex way than this.
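
A minimal sketch of that behavior-baselining pattern, using scikit-learn's IsolationForest: featurize each session (the three features below are assumptions for illustration), fit on historical activity, and flag sessions that deviate from the user's norm.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Fit an anomaly detector on historical session features, then flag sessions
# that deviate from typical behavior. Features and rates are illustrative.
rng = np.random.default_rng(1)
normal = np.column_stack([
    rng.normal(10, 2, 500),   # access hour, clustered in the workday
    rng.poisson(20, 500),     # files accessed per session
    rng.normal(50, 15, 500),  # MB downloaded
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

sessions = np.array([
    [11.0, 22, 55.0],         # typical workday session
    [3.0, 400, 900.0],        # 3 a.m. bulk download: possible exfiltration
])
for s, label in zip(sessions, detector.predict(sessions)):
    print(s, "anomalous" if label == -1 else "normal")
```

In a real deployment the flagged session would trigger an access block or an administrator alert rather than a print statement.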


How do I select a PAM solution for my business?

Before choosing a PAM solution for their business, the first question a CISO should ask is: what is it that we aim to protect? Adopting PAM is as much about mindset and approach as it is about technology. Thousands of PAM programme engagements with the world's largest organizations have cemented our view that the best way to protect the business is first to identify critical data and assets, then assess the paths that an attacker might take to compromise them. This sounds obvious, but it is not yet the common practice that it should be. Privileged identities, credentials, secrets and accounts are found throughout IT infrastructure, whether on-premises, multi-cloud or a mix of the two. The initial focus should be on those that allow access to your critical data and assets. Once these are determined, there are a number of essential features to look for: ease of implementation, ease of use and ease of integration (the latter is essential; look for integrations with your existing vendor stack); cloud readiness (you are likely to be moving applications into the cloud, and their privileged access needs to be secured); session management and recording; credential management for humans, applications, servers and machines; audit and reporting; and privileged threat alerting.
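The "identify critical assets, then trace attacker paths" exercise above can be sketched as a simple graph walk; the accounts, systems and edges below are made up for illustration, while real attack-path tooling does this at enterprise scale across directory services.

    # Illustrative sketch: model accounts and systems as a graph and
    # walk it to see which credentials can reach the crown jewels.
    # All nodes and edges here are hypothetical.
    from collections import deque

    edges = {  # "who/what can reach what"
        "helpdesk_acct": ["workstation"],
        "workstation":   ["jump_host"],
        "jump_host":     ["db_admin_acct"],
        "db_admin_acct": ["customer_db"],   # critical asset
    }

    def reaches(start: str, target: str) -> bool:
        seen, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node == target:
                return True
            if node not in seen:
                seen.add(node)
                queue.extend(edges.get(node, []))
        return False

    print(reaches("helpdesk_acct", "customer_db"))  # True -> prioritize this path for PAM controls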


Reported Data Breaches Rise 5% in Australia

The Office of the Australian Information Commissioner received 539 notifications between July and December, up from 512 in the first half of the year, according to its new report. Healthcare providers reported 133 breaches, followed by finance with 80, education with 40, and legal, accounting and management services and the federal government with 33 each. This marked the first time the Australian government entered the top five sectors reporting the most breaches, displacing the insurance industry. The federal government’s breach tally does not include intelligence agencies, state and local government agencies, public hospitals or public schools. Under Australia’s notifiable data breaches law, organizations covered by the Privacy Act 1988 are required to report breaches that are likely to result in “serious harm” within 30 days. Fines for noncompliance can range up to 2.1 million Australian dollars ($1.6 million). The breach notification law went into effect in 2018 (see: Australia Enacts Mandatory Breach Notification Law). Although breach notifications increased by 5%, the OAIC characterized that as a “modest” increase given the rising cybersecurity risks introduced by the rapid shift in early 2020 to working from home during the COVID-19 pandemic.


‘Weird new things are happening in software,’ says Stanford AI professor Chris Re

To handle the subtleties of which he spoke, Software 2.0, Re suggested, lays out a path to turn AI into an engineering discipline: one with a new systems approach, different from how software systems were built before, and an attention to new "failure modes" of AI, different from how software traditionally fails. It is a discipline, ultimately, he said, where engineers spend their time on more valuable things than tweaking hyper-parameters. Re's practical example was a system he built while he was at Apple, called Overton. Overton allows one to specify the forms of data records and the tasks to be performed on them, such as search, at a high level, in a declarative fashion. Overton, as Re described it, is a kind of end-to-end workflow for deep learning: it preps data, picks a neural network model, tweaks its parameters, and deploys the program. Engineers spend their time "monitoring the quality and improving supervision," said Re, the emphasis being on "human understanding" rather than data structures. Overton, and another system, Ludwig, developed by Uber machine learning scientist Piero Molino, are examples of what can be called zero-code deep learning. "The key is what's not required here," Re said.
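Re did not show Overton's actual schema format here; purely to illustrate the declarative, zero-code style he describes, a hypothetical spec might look like this, with the framework left to handle data prep, model selection, tuning and deployment.

    # Hypothetical declarative spec in the spirit of zero-code deep
    # learning systems like Overton or Ludwig. This is NOT Overton's
    # real format; every field name below is invented for illustration.
    spec = {
        "data": {
            "query":   {"type": "text"},
            "payload": {"type": "text"},
        },
        "tasks": {
            "intent":    {"type": "classification", "classes": ["search", "navigate", "buy"]},
            "relevance": {"type": "regression"},
        },
        # Engineers improve the supervision feeding these tasks,
        # not the hyper-parameters of whatever model is chosen.
        "supervision": "weak_labels.jsonl",
    }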


Hunting and anti-hunting groups locked in tit-for-tat row over data gathering

The data collection practices of the Hunting Office (HO), a central organisation delegated to run the administrative, advisory and supervisory functions of the UK’s hunting associations, and the Countryside Alliance (CA), a campaign organisation with over 100,000 members that promotes rural issues, have been questioned by activists running a website called Hunting Leaks. The website owners said that a monthly round-up of anti-hunting activity – which appears to have been shared via email with hunts across the UK – was passed on to Hunting Leaks by an undisclosed animal rights group. The leaked document, a report on saboteur activity between 14 November and 12 December 2020, lists the names of anti-hunting groups, the names of 30 activists (some of whom are referred to multiple times) and information about their vehicles, including registration numbers. It also includes information on the number of anti-hunting activists in attendance, details about their movements and activity on a given hunt day, and guidance on how hunt members should approach collecting information and video footage.


6 ways to bring your spiraling cloud costs under control

The best way to avoid overspending on cloud resources is to know what you need ahead of time. “Scalable cloud services, in theory, have made overprovisioning unnecessary, but old behaviors used in traditional data centers lead to [cloud] resources that are often underutilized or completely idle, which result in unnecessary spend,” wrote Gartner analysts in a December 2020 research note. This may not be music to the ears of anyone who has already made sizable commitments in the scramble to react to the challenges of the pandemic, but it does highlight the importance of right-sizing your cloud environment where possible. “Start with knowing what you spend—not just the invoice you get—but what are you spending on, where are you spending the most, and where are you seeing growth,” said Eugene Khvostov, vice president of product engineering at cost-management software specialist Apptio. For larger organizations, a proven approach is to establish a dedicated cloud center of excellence, tasked with monitoring and governing cloud usage and establishing best practices. For smaller organizations, this responsibility falls on senior members of the IT team, who will be tasked with establishing budgetary guardrails, often linked to longer-term ROI requirements.
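As a concrete starting point for the "know what you spend on" step Khvostov describes, here is a minimal sketch assuming AWS, boto3 and Cost Explorer permissions; the date range is illustrative and error handling is omitted.

    # Minimal sketch: break last month's AWS bill down by service with
    # the Cost Explorer API, rather than reading only the invoice total.
    # Assumes configured credentials with ce:GetCostAndUsage permission.
    import boto3

    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    costs = sorted(
        ((g["Keys"][0], float(g["Metrics"]["UnblendedCost"]["Amount"]))
         for g in resp["ResultsByTime"][0]["Groups"]),
        key=lambda kv: kv[1], reverse=True,
    )
    for service, amount in costs[:10]:  # top ten spend drivers
        print(f"{service:45s} ${amount:,.2f}")

Running something like this on a schedule, and comparing month over month, surfaces both where you spend the most and where spend is growing.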


Looking beyond Robotic Process Automation

There are a whole host of reasons why a process might not be suitable for automation, but you should consider things such as the time it will take to automate and how many steps in the process require human intervention. Generally speaking, the more logical and well-defined a process is, the faster and easier it is to automate. With a holistic view of the processes in your organisation, you will be able to pinpoint which processes can and should be automated, as well as those where people are the key drivers. This will be crucial not only in achieving greater efficiencies, but in demonstrating the benefits to employees and giving them an understanding of where they fit into this new way of working. Consider where upskilling or knowledge sharing might be needed to ensure employees are equipped to support automation. It’s all well and good having the technology in place, but it won’t run effectively without the right people and buy-in alongside it. The relationship between people and technology is going to become even more important as the capabilities of RPA and other machine learning-based technologies advance over the next few years. Just because you can’t fully automate a process doesn’t mean greater efficiencies can’t be achieved.
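Purely as a toy illustration of the triage described above, a heuristic might weigh how rule-based and well-documented a process is; the weights below are arbitrary, and any real assessment would be far richer.

    # Toy heuristic (illustrative only): well-defined, rule-based
    # processes with few human-judgment steps score highest as
    # automation candidates. Weights and thresholds are arbitrary.
    def automation_score(total_steps: int, human_judgment_steps: int,
                         rules_fully_documented: bool) -> float:
        if total_steps == 0:
            return 0.0
        rule_based = 1 - human_judgment_steps / total_steps
        return round(rule_based * (1.0 if rules_fully_documented else 0.5), 2)

    print(automation_score(20, 1, True))    # 0.95 -> strong RPA candidate
    print(automation_score(20, 12, False))  # 0.2  -> keep people in the loop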



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein