Daily Tech Digest - July 24, 2021

Quantum entanglement-as-a-service: "The key technology" for unbreakable networks

Like classical networks, quantum networks require hardware-independent control plane software to manage data exchange between layers, allocate resources and control synchronization, the company said. "We want to be the Switzerland of quantum networking," said Jim Ricotta, Aliro CEO. Networked quantum computers are needed to run quantum applications such as physics-based secure communications and distributed quantum computing. "A unified control plane is one of several foundational technologies that Aliro is focused on as the first networking company of the quantum era," Ricotta said. "Receiving Air Force contracts to advance this core technology validates our approach and helps accelerate the time to market for this and other technologies needed to realize the potential of quantum communication." Entanglement is a physical phenomenon involving tiny things such as individual photons or electrons, Ricotta said. When particles are entangled, "then they become highly correlated" and behave as a single system. It doesn't matter if they are hundreds of miles apart, he said.


Design for Your Strengths

Strengths and weaknesses are often mirrors of each other. My aerobic weakness had, as its inverse, a superstrength of anaerobic power. Indeed, these two attributes often go hand in hand. Finally, I had figured out how to put this to use. After the Lillehammer Olympics, I dropped out of the training camp. But I was more dedicated than ever to skating. I moved to Milwaukee, and without the financial or logistical support of the Olympic Committee, began a regimen of work, business school, and self-guided athletic training. I woke up every day at 6 a.m. and went to the rink. There I put on my pads and blocks and skated from 7 until 9:30. Then I changed into a suit for my part-time job as an engineer. At 3 p.m., I left work in Milwaukee and drove to the Kellogg Business School at Northwestern, a two-hour drive. I had class from 6 to 9 p.m., usually arrived home at 11, and lifted weights until midnight. I did that every day for two and a half years. Many people assume that being an Olympic athlete requires a lot of discipline. But in my experience, the discipline is only physical. 


‘Next Normal’ Approaching: Advice From Three Business Leaders On Navigating The Road Ahead

With some analysts predicting a "turnover tsunami" on the horizon, talent strategy has taken on a new sense of urgency. Lindsey Slaby, Founding Principal of marketing strategy consultancy Sunday Dinner, focuses on building stronger marketing organizations. She shares: Organizations are accelerating growth by attracting new talent muscle and re-skilling their existing teams. A rigorous approach to talent has never been as important as it is right now. The relationship between employer and employee has undergone significant recalibration over the last year, with the long-term impact of the nation’s largest work-from-home experiment yet to come into clear view. But much like the Before Times, perhaps the greatest indicator of how an organization will fare on the talent front comes down to how it invests in its people and specifically their future potential. Slaby believes there is a core ingredient to any winning talent strategy: Successful organizations prioritize learning and development. Training to anticipate the pace of change is essential. It is imperative that marketers practice ‘strategy by doing’ and understand the underlying technology that fuels their go-to-market approach.


The Beauty of Edge Computing

The volume and velocity of data generated at the edge are primary factors that will impact how developers allocate resources at the edge and in the cloud. “A major impact I see is how enterprises will manage their cloud storage because it’s impractical to save the large amounts of data that the Edge creates directly to the cloud,” says Will Kelly, technical marketing manager for a container security startup (@willkelly). “Edge computing is going to shake up cloud financial models so let’s hope enterprises have access to a cloud economist or solution architect who can tackle that challenge for them.” With billions of industrial and consumer IoT devices being deployed, managing the data is an essential consideration in any edge-to-cloud strategy. “Advanced consumer applications such as streaming multiplayer games, digital assistants and autonomous vehicle networks demand low latency data so it is important to consider the tremendous efficiencies achieved by keeping data physically close to where it is consumed,” says Scott Schober, President/CEO of Berkeley Varitronics Systems, Inc. (@ScottBVS).


Facebook Makes a Big Leap to MySQL 8

The company skipped the MySQL 5.7 release entirely, the major release between 5.6 and 8.0. At the time, Facebook was building its custom storage engine for MySQL, called MyRocks, and didn’t want to interrupt the implementation process, the engineers write. MyRocks is a MySQL adaptation of RocksDB, a storage engine optimized for fast write performance that Instagram used to optimize Cassandra. Facebook itself was using MyRocks to power its “user database service tier,” but it required features in MySQL 8.0 to fully support such optimizations. Skipping over version 5.7, however, complicated the upgrade process. “Skipping a major version like 5.7 introduced problems, which our migration needed to solve,” the engineers admitted in the blog post. Servers could not simply be upgraded in place. They had to use a logical dump to capture the data and rebuild the database servers from scratch — work that took several days in some instances. API changes from 5.6 to 8.0 also had to be rooted out, and supporting two major versions within a single replica set is just plain tricky.


Research shows AI is often biased. Here's how to make algorithms work for all of us

Inclusive design emphasizes inclusion in the design process. The AI product should be designed with consideration for diverse groups across gender, race, class, and culture. Foreseeability is about predicting the impact the AI system will have right now and over time. Recent research published in the Journal of the American Medical Association (JAMA) reviewed more than 70 academic publications comparing the diagnostic prowess of doctors with that of their digital doppelgangers across several areas of clinical medicine. A lot of the data used in training the algorithms came from only three states: Massachusetts, California and New York. Will the algorithms generalize well to a wider population? Many researchers are worried about algorithms for skin-cancer detection: most of them do not perform well in detecting skin cancer on darker skin because they were trained primarily on images of light-skinned individuals. The developers of the skin-cancer detection models didn't apply principles of inclusive design in the development of their models.
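One concrete way to apply inclusive design is to audit how training examples are distributed across subgroups before a model is ever trained. The sketch below uses made-up records; the field name and the 10% threshold are purely illustrative assumptions, not anything from the JAMA study:

```python
from collections import Counter

def underrepresented(records, field, threshold=0.10):
    """Return groups whose share of the training data falls below `threshold`."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items() if n / total < threshold}

# Hypothetical skin-lesion training records, heavily skewed by skin tone.
records = ([{"skin_tone": "light"}] * 90
           + [{"skin_tone": "dark"}] * 10
           + [{"skin_tone": "medium"}] * 5)

flags = underrepresented(records, "skin_tone")
# Both "dark" and "medium" fall under the 10% threshold and get flagged
# for additional data collection before training proceeds.
```

A check like this is cheap to run in a data pipeline and makes generalization gaps visible before they become diagnostic gaps.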


Leverage the Cloud to Help Consolidate On-Prem Systems

The recommended approach is to "create or recreate" a representation of the final target system in the cloud, but not to re-engineer any components into cloud-native equivalents. The same number of LPARs, the same memory/disk/CPU allocations, the same file system structures, the same exact IP addresses and hostnames, and the same network subnets are created in the cloud, representing as closely as possible a "clone" of the eventual system of record that will exist on-prem. The benefit of this approach is that you can apply "cloud flexibility" to what was historically a "cloud stubborn" system. Fast cloning, ephemeral longevity, software-defined networking, and API automation can all be applied to the temporary stand-in running in the cloud. As design principles are finalized based on research performed on the cloud version of the system, those findings can be applied to the on-prem final buildout. To jump-start the cloud build-out process, it is possible to reuse existing on-prem assets as the foundation for components built in the cloud. LPARs in the cloud can be based on existing mksysb images already created on-prem.


Scaling API management for long-term growth

To manage the complexity of the API ecosystem, organisations are embracing API management tools to identify, control, secure and monitor API use in their existing applications and services. Having visibility and control of API consumption provides a solid foundation for expanding API provision, discovery, adoption and monetisation. Many organisations start with an in-house developed API management approach. However, as their API management strategies mature, they often find that the growing complexity of maintaining and monitoring the usage of APIs, and of the components of the API management solution itself, becomes a drain on technical resources and a source of technical debt. A common challenge for API management approaches is becoming a victim of one’s own success. For instance, a company that deploys an API management solution for a region or department may quickly get requests for access from other teams seeking to benefit from the value delivered, such as API discoverability and higher service reliability. While this demand should be seen as proof of a great approach to digitalisation, it adds challenges and raises questions around, for example, capacity, access control, administration rights and governance.


Is Blockchain the Ultimate Cybersecurity Solution for My Applications?

Blockchain can provide a strong and effective solution for securing networked ledgers. However, it does not guarantee the security of individual participants or eliminate the need to follow other cybersecurity best practices. A blockchain application depends on external data and other at-risk resources, so it cannot be a panacea. The blockchain implementation code and the environments in which the blockchain technology runs must be checked for cyber vulnerabilities. Blockchain technology provides stronger transactional security than traditional, centralized computing services for a secured, networked transaction ledger. For example, say I use distributed ledger technology (DLT), an intrinsic blockchain feature, while creating my blockchain-based application. DLT increases cyberresiliency because it creates a situation where there is no single point of failure. In the DLT, an attack on one or a small number of participants does not affect other nodes. Thus, DLT helps maintain transparency and availability and keeps transactions flowing. Another advantage of DLT is that endpoint vulnerabilities are addressed.
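The tamper-evidence that underpins such a ledger can be illustrated with a toy hash chain. This is a sketch of the general idea only, not any particular blockchain's block format: each block commits to its predecessor's hash, so altering one historical entry invalidates every later link, and every honest replica detects the change.

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """A block commits to its payload and to the previous block's hash."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return {"prev": prev_hash, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def valid(chain):
    """Re-derive every hash and check each block links to its predecessor."""
    prev = "0" * 64  # genesis marker
    for block in chain:
        if block["prev"] != prev:
            return False
        if make_block(block["prev"], block["payload"])["hash"] != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Build a tiny chain of hypothetical transactions.
chain, prev = [], "0" * 64
for tx in ["alice->bob:5", "bob->carol:2"]:
    block = make_block(prev, tx)
    chain.append(block)
    prev = block["hash"]

assert valid(chain)
chain[0]["payload"] = "alice->bob:500"   # tamper with history...
assert not valid(chain)                  # ...and validation fails everywhere
```

Because every replica can run `valid` independently, compromising one node's copy changes nothing for the others, which is the resilience property described above.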


Why bigger isn’t always better in banking

Of course, there are outliers. One of them is Brown Brothers Harriman, a merchant/investment bank that traces its origins back some 200 years, and that is the subject of an engaging new book, Inside Money. Historian Zachary Karabell (disclosure: we were graduate school classmates in the last millennium) offers not just an intriguing family and personal history, but a lesson in how to balance risk and ambition against responsibility and longevity—and in why bigger isn’t always better. The firm’s survival is even more remarkable given that US financial history often reads as a string of booms, bubbles, busts, and bailouts. The Panic of 1837. The Panic of 1857. The Civil War. The Panic of 1907. The Great Depression. The Great Recession of 2008. In finance, leverage—i.e., debt—is the force that allows companies to lift more than they could under their own power. It’s also the force that can crush them when circumstances change. And Brown Brothers has thrived in part by avoiding excessive leverage. Today, the bank primarily “acts as a custodian for trillions of dollars of global assets,” Karabell writes. “Its culture revolves around service.”



Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek

Daily Tech Digest - July 23, 2021

The CISO: the enabler of innovation

With digital transformation already in focus for many businesses, adding a now-distributed workforce on top of this scenario ratchets up the security challenge. One in five CEOs and CISOs saw a major increase in all types of cyber attacks since COVID-19, with supply chain attacks topping the table side by side with ransomware. The key here is to enable and drive businesses, rather than impede them. By discreetly adjusting policies and controls to support remote workers, businesses can enable teams to work better in their own roles. This means allowing them to access data from anywhere while providing better visibility 24/7, enabling more proactive alerts and controls. In fact, 58% of CEOs and CISOs have recognised the need for a more integrated trust framework, with 48% also substantially increasing the use of cloud-based cyber security systems. In the future, the workforce will have even more autonomy within the decentralised cultures that develop as business leaders find new ways to drive collaboration and creativity. For the CISO, this means continuously adapting to an evolving workplace.


Why unstructured data is the future of data management

Today, data is a valuable corporate asset. You’ve got to be strategic with it because it’s not just for your BI teams, but for the R&D and customer success teams. They need historical data to build new products or to improve the ones they already have. This is super relevant in manufacturing, such as in the semiconductor chip industry, but also in other industries that are so important to our economy, such as pharmaceuticals. COVID researchers depended upon access to SARS data when developing vaccines and treatments. Data often becomes valuable again later, and what if you don’t know what you have or you can’t find it? We’ve had customers in the media and entertainment business, and in the past when they wanted to find an old show, they’d need access to a tape archive. Then, they needed an asset tag to locate the tape. That can be very difficult, and it’s why archiving is not popular. Live archive solutions that are available today make archived data instantly accessible and transparently tier data so users can easily locate files and access them anytime.


Here’s how to check your phone for Pegasus spyware using Amnesty’s tool

The first thing to note is the tool is command line or terminal based, so it will take either some amount of technical skill or a bit of patience to run. We try to cover a lot of what you need to know to get up and running here, but it’s something to know before jumping in. The second note is that the analysis Amnesty is running seems to work best for iOS devices. In its documentation, Amnesty says the analysis its tool can run on Android phone backups is limited, but the tool can still check for potentially malicious SMS messages and APKs. Again, we recommend following its instructions. ... If you’re using a Mac to run the check, you’ll first need to install both Xcode, which can be downloaded from the App Store, and Python3 before you can install and run mvt. The easiest way to obtain Python3 is using a program called Homebrew, which can be installed and run from the Terminal. After installing these, you’ll be ready to run through Amnesty’s iOS instructions. If you run into issues while trying to decrypt your backup, you’re not alone. The tool was giving me errors when I tried to point it to my backup, which was in the default folder. 


Critical Jira Flaw in Atlassian Could Lead to RCE

The vulnerability has to do with a missing authentication check in Jira’s implementation of Ehcache, an open-source, Java-based distributed cache for general-purpose caching, Java EE and lightweight containers that is used to improve performance and simplify scalability. Atlassian said that the bug was introduced in version 6.3.0 of Jira Data Center, Jira Core Data Center, Jira Software Data Center and Jira Service Management Data Center (known as Jira Service Desk prior to 4.14). According to Atlassian’s security advisory, that list of products exposed an Ehcache remote method invocation (RMI) network service that attackers – who can connect to the service on port 40001 and potentially 40011 – could use to “execute arbitrary code of their choice in Jira” through deserialization, due to missing authentication. RMI is an API that acts as a mechanism to enable remote communication between programs written in Java. It allows an object residing in one Java virtual machine (JVM) to invoke an object running on another JVM; often, it involves one program on a server and one on a client.
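Administrators can do a quick reachability check of those RMI ports from an untrusted network segment. A minimal sketch follows; the hostname is a placeholder, and note that closing the port is only a mitigation — Atlassian's advisory recommends upgrading the affected products:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False

# The advisory calls out port 40001 and potentially 40011. If either is
# reachable from outside the trusted network, the Ehcache RMI service
# should be firewalled off (and Jira patched per the advisory).
exposed = [p for p in (40001, 40011) if port_open("jira.example.internal", p)]
```

This only tells you the service is reachable, not whether it is vulnerable; treat any exposure of these ports as a finding to escalate.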


Improving Your Productivity With Dynamic Problems

First, a Huffman code tree is built. Let the original alphabet consist of n characters, the i-th of which occurs pi times in the input text. Initially, all symbols are considered active nodes of the future tree, with the i-th node labeled pi. At each step, we take the two active vertices with the smallest labels, create a new vertex, label it with the sum of the labels of those vertices, and make it their parent. The new vertex becomes active, and its two children are removed from the list of active vertices. The process is repeated until only one active vertex remains, which becomes the root of the tree. Note that the symbols of the alphabet are represented by the leaves of this tree. For each leaf (symbol), the length of its Huffman code equals the length of the path from the root of the tree to it. The code itself is constructed as follows: for each internal vertex of the tree, consider the two arcs going from it to its children. We assign the label 0 to one of the arcs and 1 to the other. The code of each symbol is the sequence of zeros and ones along the path from the root to its leaf.
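The construction above can be sketched in a few lines of Python using a min-heap of active nodes (a minimal sketch; the input string is just an illustrative example):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build Huffman codes for the symbols of `text`.

    Heap entries are (count, tiebreak, node); a node is either a symbol
    (a leaf) or a (left, right) pair (an internal vertex).
    """
    counts = Counter(text)
    heap = [(n, i, sym) for i, (sym, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    # Repeatedly merge the two active vertices with the smallest labels.
    while len(heap) > 1:
        n1, _, left = heapq.heappop(heap)
        n2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, tiebreak, (left, right)))
        tiebreak += 1

    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):      # internal vertex: 0 on one arc, 1 on the other
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                            # leaf: the path from the root is the code
            codes[node] = prefix or "0"  # single-symbol alphabet edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
# Frequent symbols ('a') receive shorter codes than rare ones ('c', 'd'),
# and no code is a prefix of another.
```

The unique `tiebreak` counter keeps the heap from ever comparing node structures when counts are equal, which is the usual idiom for heaps of mixed payloads.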


Top 5 NCSC Cloud Security Principles for Compliance

Modern business IT infrastructures are complex, and data regularly moves between different systems across the network. It’s critical to protect sensitive data belonging to your customers and employees as it traverses between business applications/devices and the cloud. It’s also imperative that your cloud vendor protects data in transit inside the cloud, such as when data is replicated to a different region to ensure high availability. ... Different regulations have different requirements about where protected data can be stored. For example, some regulations stipulate that data can only be transferred to companies with sufficient levels of protection in processing personal data. If your business opts for a cloud provider that doesn’t provide transparency over the location of data, you could end up unknowingly in breach of regulations. ... The last thing your business wants is to use a public cloud service only to find that a malicious hacker accessed your sensitive data by compromising another customer first. This type of concerning non-compliance scenario can happen when there is insufficient separation between different customers of a cloud service.
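On the client side, protecting data in transit largely comes down to refusing connections that are not authenticated TLS. In Python's standard library, the default SSL context already enforces certificate-chain and hostname verification, which is the baseline to insist on (a sketch of the principle, not a full client):

```python
import ssl

# The default context requires a valid server certificate chain and a
# matching hostname, so connections to an unauthenticated or misconfigured
# endpoint fail closed rather than silently downgrading.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

Passing this context to, for example, `http.client.HTTPSConnection` keeps application traffic encrypted and authenticated end to end; the same "verify by default" posture should be confirmed for replication traffic inside the cloud provider.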


Data and Analytics Salaries Heat Up in Recovery Economy

There are a few reasons why the market is really strong for data science and analytics pros right now. First, we are coming off a period of stagnation where no one wanted to change jobs and salaries stayed the same. That means those individuals who were considering a job change most likely put those plans on hold during the pandemic. Now all those people are getting back into the market. Second, there are so many new remote job opportunities, which opens up a whole new realm of job possibilities for data science and analytics pros. Third, as people move on to new jobs, they create vacancies where they were, opening up additional job vacancies. Fourth, there are some industries that had to change their business models to continue to operate during the pandemic economy. Burtch Works specifically points to retail, which had to enable digital channels to replace sales lost in brick-and-mortar stores. The Burtch Works report notes that many retailers have been expanding their data science and analytics teams and offering higher compensation than Burtch Works has typically seen in retail.


Home-office networks demand better monitoring tools

Networking professionals said they are enhancing their network operations toolsets in three primary ways. First, 54.2% are looking for tools that deliver security-related insights into home-office environments. This will help them collaborate with security teams to ensure that their increasingly distributed networks are compliant with policies. It will also help them discern whether a user-experience issue is related to a security problem. Second, 52.6% need new dashboard and reporting features that allow them to focus on home offices and remote workers, which will help admins and engineers spot problems and troubleshoot them more efficiently. If their existing tools lack adequate dashboard and reporting customization, they’ll have to look elsewhere for this view into their networks. Third, 49.4% need to upgrade the scalability of their tools. ... Network teams will need to integrate their tools with other systems to improve their ability to support home workers. For instance, 43% said home-office monitoring requirements are producing a need for their monitoring tools to integrate with their SD-WAN or secure access service edge (SASE) solution. 


Hybrid work: 7 ways to enable asynchronous collaboration

One of the main differences between asynchronous and synchronous work is that the former tends to center on time- or task-defined work processes. “Asynchronous work requires a grasp of what the outcome – the final product of work – needs to be, as opposed to the amount of time spent in close coordination producing the final product,” says Dee Anthony, director at global technology research and advisory firm ISG. IT leaders need to get better at defining, managing, and measuring outcomes. Anthony suggests taking a page out of the agile playbook: Identify outcomes, estimate the effort required to accomplish them, track work velocity, and perform regular reviews. You must also foster a culture of trust. “Having people work across time, even in the same country, means that the old nine-to-five is out the window,” says Iain Fisher, director at ISG. "Managers cannot be there all the time, so a culture change of trust and respect must evolve." ... “Working asynchronously requires very strong written communication skills to avoid ambiguity and misunderstanding,” says Lars Hyland.


Outcome Mapping - How to Collaborate With Clarity

The Outcome Map is an excellent way to create energetic communication, clarity, and alignment from the start (or re-start) of any initiative. It also reminds you to stay on track as you progress and helps you recognize when you’re drifting from the path. By adding measurements and methods, you can describe where you want to go and how you plan to get there. In both a project and product approach, clarity of outcomes is critical, but what’s often forgotten are the factors affecting the odds of achieving the outcome. Outcome mapping allows us to explore, anticipate, and design mitigation approaches to factors impacting our desired outcome. For this reason, it’s also commonly referred to as impact mapping. In practice, you can map many factors involved in a given outcome, but a few critical ingredients should always be present. Defining measures (or indicators) of progress (summarized as ‘Measures’ in the map itself) allows you to measure and celebrate progress without waiting until the distant deadline of your primary outcome to find out if you’ve succeeded or failed.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - July 22, 2021

How SMEs in e-commerce can drive value from machine learning

It’s important to realise that implementing machine learning in processes like customer segmentation means digging deeper into data than ever before, and ensuring the algorithms your business uses are underpinned by a thorough understanding of this data. Simply taking superficially similar customers and grouping them together when recommending products won’t go far enough for it to work successfully. The next step is to ensure the business is compatible with machine learning in the long run. For example, business problems where machine learning could be useful should be identified early on, and companies should get into the habit of preparing their data so that machine learning can be integrated without too much difficulty and disruption. Crucially, organisations should also identify relevant machine learning experts who can drive such projects forward, either internally or through outsourcing via external consultants. Finally, one of the most pressing concerns in the minds of many business leaders reluctant to implement machine learning is the threat the technology could pose to human staff.
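The "grouping similar customers" idea can be made concrete with a toy one-dimensional k-means over made-up annual-spend figures. Everything here is a hypothetical sketch: a real segmentation would use many features and a library such as scikit-learn, and the numbers are invented for illustration:

```python
def kmeans_1d(values, k, iters=20):
    """Toy 1-D k-means: cluster customers on a single feature (e.g. spend)."""
    # Seed centers by sampling the sorted values at regular intervals.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            # Assign each value to its nearest center.
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical annual spend: low-, mid- and high-value customers.
spend = [120, 130, 125, 900, 950, 40, 35, 45]
centers, segments = kmeans_1d(spend, k=3)
```

Even this toy version shows why superficial grouping is not enough: the segments only become meaningful once the business understands what the underlying feature actually measures and whether it predicts purchasing behaviour.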


Are you ready for the newest era of DevSecOps?

Many organizations have shifted security left, or at least started on their journey, in an effort to improve development velocity while also managing security risks. When starting with their incumbent tools, many organizations find it difficult to cobble together a variety of different security scanners and to integrate them into a complex DevOps toolchain. We hear from customers that siloed tooling has hindered collaboration. Many of our customers turned to GitLab to simplify their DevSecOps process. GitLab is often at the forefront of the DevSecOps and "shift security left" conversations among developers and businesses because of the simplicity and effectiveness of embracing security capabilities via a single platform. Developers need to find and fix vulnerabilities within their natural workflow earlier, without friction or distractions, while businesses must protect their IP in an age when the stakes of security have never been higher. When security capabilities are embedded into the end-to-end software processes, then developers can spend time writing code instead of managing tools. 
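As an illustration of embedding scanning in the pipeline itself, GitLab ships managed CI templates that add security jobs with a few lines of configuration (template names as of this writing; check GitLab's documentation for the set available in your version and tier):

```yaml
# .gitlab-ci.yml — pull in GitLab's managed security scanning jobs
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
```

Because the scanners run as ordinary pipeline jobs, findings surface in the same merge-request workflow developers already use, which is the "single platform" point being made above.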


Kubernetes Cloud Clusters Face Cyberattacks via Argo Workflows

Researchers said the misconfigurations can also expose sensitive information such as code, credentials and private container-image names (which can be used to assist in other kinds of attacks). Intezer’s scan of the web found scads of unprotected instances, operated by companies in several industries, including technology, finance and logistics. “We have identified infected nodes and there is the potential for larger-scale attacks due to hundreds of misconfigured deployments,” according to Intezer. In one case, bad code was running on an exposed cluster in Docker Hub for nine months before being discovered and removed. Attacks aren’t difficult to carry out: Researchers observed different popular Monero-mining malware being housed in containers located in repositories like Docker Hub, including Kannix and XMRig. Cybercriminals need only to pull one of those containers into Kubernetes via Argo or another avenue. For instance, Microsoft recently flagged a wave of miners infesting Kubernetes via the Kubeflow framework for running machine-learning workflows.


AI execs unpack call center automation boom

Purkayastha says that technological improvements over the past five years have set the stage for the wider adoption of automation in the call center. Superior automatic speech recognition and transcription are accelerating the velocity of deploying solutions, while knowledge graphs — knowledge bases with graph-structured data models — are extracting information pertinent to support agents. Beyond this, automation technologies now better understand the semantics of conversations and continuously learn, optimizing toward business KPIs. Of course, these systems require data to train, and accumulating the data — along with processing, normalizing, and cleaning it — can take time. Schebella says that it’s not unusual for 30, 60, or 90 days to elapse before a natural language processing model begins to perform satisfactorily. In the future, he expects data collection to become less of a problem as call automation technologies provide more real-time feedback — for example, indicating to a customer service agent whether they’re speaking too quickly or slowly. 


A Guide to Stress-Free Cybersecurity for Lean IT Security Teams

Today's cybersecurity landscape is enough to make any security team concerned. The rapid evolution and increased danger of attack tactics have put even the largest corporations and governments at heightened risk. If the most elite security teams can't prevent these attacks from happening, what can lean security teams look forward to? Surprisingly, leaner teams have a much greater chance than they think. It might seem counterintuitive, but recent history has shown that large numbers and huge budgets aren't the difference-makers they once were. Indeed, having the right strategy in place is a clear indicator of an organization's success today. A new guide by XDR provider Cynet looks to dispel the myth that bigger is always better and shows a smarter way forward for lean IT security teams. The new guide focuses on helping lean IT security teams plan strategies that can protect their organizations while reducing the level of stress they face. Due to the rise of cyber tools that can help level the playing field and a new generation of security professionals, smaller organizations can now defend their organizations equally.


4 Patterns for Microservices Architecture in Couchbase

One of the key characteristics of microservices is their loose coupling, so that they can be developed, deployed, access-controlled and scaled on an individual basis. Loose coupling requires that the underlying database infrastructure supports isolating the data for the individual microservices. That could be either by running individual database instances per microservice or by controlling access to the relevant parts of the data. While traditional relational databases support isolation using database schemas, they are often difficult to scale, they lack the flexibility of a JSON data model, and most importantly, they become the single point of failure in case of an outage of your database infrastructure. This is an important aspect to consider when designing your microservice architecture, as an outage has severe consequences for all microservices sharing the same database. Couchbase is designed for microservices. It’s a highly scalable, resilient and distributed database. It offers great flexibility and provides multiple levels of isolation to support up to one thousand microservices in the same Couchbase cluster.


Moving OT to the cloud means accounting for a whole new host of security risks

In addition to using attacks that all cloud platforms are vulnerable to, Team82 said one of its approaches involves gaining unauthorized access to an operator account "using different methods." Again, these different methods are likely similar to other attacks used to steal credentials, like phishing, which has been on the rise as more organizations move to cloud-based models to enable remote work. Team82 detailed two different approaches to gaining access to OT networks and hardware: A top-down approach that involves gaining access to a privileged account and thus a cloud dashboard, and a bottom-up approach that starts by attacking an endpoint device like a PLC from which they can execute malicious remote code. Regardless of the method, the end result for the attacker is the same: Access to, and control of, an OT cloud management platform and the ability to disrupt devices and businesses. An attacker could stop a PLC program responsible for temperature regulation of the production line, or change centrifuge speeds as was the case with Stuxnet.


Why Going Digital Isn’t Enough to Meet the New Customer Experience (CX) Imperative

Traditional silos are directed by functional leaders—service, marketing, commerce—but customers expect a unified approach to CX. Building a customer-centered organization requires operational innovation, and existing models don’t scale. CDOs, CMOs, CIOs, and CxOs—supported by CEOs, CFOs, COOs, and board members—must build an alliance: a working group or steering committee that is responsible and accountable for centralized, unified, and collaborative customer understanding and engagement. Ultimately, a customer-centered organization needs a leader who is probably not the chief executive officer but a chief experience officer: an orchestrator with day-to-day leadership, accountability, and a tireless focus on the personal touch in a reimagined analog, digital, and hybrid customer journey. Companies leading in CX are more than twice as likely to have a chief experience officer as those that have made less progress.


The role of tech in the future of keeping the workforce well post-pandemic

The bigger picture is that a ‘return to work’ doesn’t mean back to the office. It might not even mean remote working. Talk of the rise of the ‘third workplace’, where employees can work from wherever they choose, means that a modern-day workforce needs a completely mobile infrastructure. So what does this look like? Firstly, using an integrated company news feed as part of your communications platform allows remote workers to cut past the often laborious task of checking their emails and get to the priorities of the day. While emails can be easily overlooked, a news feed that highlights urgent issues and offers real-time updates, which remote workers can receive across different channels, helps build a culture of openness and inclusion. Having the tools to communicate health and safety updates creates transparency around important matters like the risk of transmission and the safety measures implemented. A key question for organisations post-pandemic has to be how they leverage tech beyond workforce optimisation. 


Questions that help CISOs and boards have each other’s back

An accountability approach should dictate who takes ownership of what. The vice president of human resources is responsible for organizing vetting; the chief information officer must be held responsible for IT security; and the chief financial officer must have plans for combating many forms of fraud, including strategies for combating phishing and business email compromise, scenarios for handling ransomware attacks and efforts to harden the tools and processes used by accounts payable. The deeper you follow the accountability way of thinking, the more inclusive your leadership must be when it comes to cybersecurity. This can’t be a lone-wolf operation. The purpose of a security team is to become an ally for your executive team, not to pacify it. A proper security leader must determine—and share with the CEO and the board of directors, if necessary—whether the responsible persons are up to their tasks and committed to reaching security objectives.



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis

Daily Tech Digest - July 21, 2021

Two-for-Tuesday vulnerabilities send Windows and Linux users scrambling

The world woke up on Tuesday to two new vulnerabilities—one in Windows and the other in Linux—that allow hackers with a toehold in a vulnerable system to bypass OS security restrictions and access sensitive resources. As operating systems and applications become harder to hack, successful attacks typically require two or more vulnerabilities. One vulnerability allows the attacker access to low-privileged OS resources, where code can be executed or sensitive data can be read. A second vulnerability elevates that code execution or file access to OS resources reserved for password storage or other sensitive operations. The value of so-called local privilege escalation vulnerabilities, accordingly, has increased in recent years. The Windows vulnerability came to light by accident on Monday when a researcher observed what he believed was a coding regression in a beta version of the upcoming Windows 11. The researcher found that the contents of the security account manager—the database that stores user accounts and security descriptors for users on the local computer—could be read by users with limited system privileges.


Establishing the right analytics-based maintenance strategy

Although predictive maintenance is often held up as a prime example of the value that IoT and advanced analytics can generate, in fact, any predictions in the real world are imperfect. Our research shows that some organizations, even with highly qualified AA teams, are unlikely to realize the desired impact. The AA algorithm employed may fail to predict a breakdown, giving a false negative, and in other cases can predict an event that would not have happened, giving a false positive. Although much effort is often put into minimizing false negatives, it is often the false positives that make predictive maintenance less viable. Make no mistake, predictive maintenance can be very valuable. In situations with very high cost or safety issues associated with a breakdown, such as the midair failure of a jet turbine, operators need the closest estimate possible of when a breakdown might occur. In addition, in cases in which failures are highly predictable and well-understood—and the chance of a false positive is therefore minimal or very low-cost—predictive maintenance is well worth the expense.
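A toy calculation makes the economics concrete. All the numbers below are invented for the sketch; the point is only to show how false positives, not false negatives, can erase the value of predictive maintenance.

```python
# Toy illustration (not from the article): expected net savings from acting
# on a predictive-maintenance model, given made-up costs and rates.

def maintenance_value(n_machines, failure_rate, recall, false_positive_rate,
                      cost_breakdown, cost_inspection):
    """Expected net savings from acting on the model's predictions."""
    failures = n_machines * failure_rate
    healthy = n_machines - failures
    caught = failures * recall                    # true positives: breakdowns avoided
    false_alarms = healthy * false_positive_rate  # needless inspections/part swaps
    return caught * cost_breakdown - false_alarms * cost_inspection

# 1000 machines, 2% fail per period; the model catches 80% of failures
# but also flags 10% of healthy machines.
print(maintenance_value(1000, 0.02, 0.80, 0.10,
                        cost_breakdown=50_000, cost_inspection=5_000))  # positive

# With a cheaper failure mode, the same false-positive rate flips the sign.
print(maintenance_value(1000, 0.02, 0.80, 0.10,
                        cost_breakdown=10_000, cost_inspection=5_000))  # negative
```

With expensive breakdowns (a jet turbine), even many false alarms are worth it; with cheap, well-understood failures, the false-alarm cost dominates, which matches the article's two viable cases.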


Politicization and stigmatization won’t solve cyber security concerns: Chinese Mission to the EU and embassies

Slamming the EU and NATO's allegations, spokesperson of the Chinese Mission to the EU said that the statements were not based on facts, but speculation and groundless accusations. He added that China has always been a firm defender of cyber security and has cracked down on cyber attacks launched within China or using Chinese cyber facilities. "For years, certain countries in the West have abused their technological advantages for massive and indiscriminate eavesdropping across the world, even on its close allies. At the same time, they have boasted themselves as the guardians of cyber security. They push around their allies to form small circles and repeatedly smear and attack other countries on cyber security issues," the Mission said. Such practices fully expose the West's hypocrisy, it added. The Mission said it will follow closely NATO's attempts to break its geographical constraints under the guise of cyber security to make false accusations against China. Over the years, China has been a major victim of cyber attacks. 


Old Agile vs New Agile

Agile 2 is new in that it aggregates the ideas of these new thinkers, and integrates these ideas into a cohesive system of thought, while adding missing pieces. Agile 2 interprets these many writings and translates them into a common and holistically integrated shared narrative. But what is that narrative? Agile 2 is complex because humans are complex. It is not a set of bumper sticker maxims asserted without supporting explanation and rationale. Agile 2 is nuanced and broad, and is published with the thought that went into it. But I will summarize it, to give you a sense. Agile 2 is defined by its Values and Principles. Most of those principles could be summarized as described here. Basically, Agile 2 says that extremes don’t usually work well, and that judgment is called for when applying any practice. It also emphasizes the critical importance of having the right kinds of leadership for each situation. Note that “kinds of leadership” is plural. Agile 2 favors emergent leadership and autonomy, but it views those as aspirations rather than assumptions, and includes the theory that senior leaders need to be intentional about the kinds of leadership needed within their organization ...


Google advances ‘invisible’ cloud security with intrusion detection, analytics and more

Google’s new Cloud IDS offering epitomizes that vision. Announced in preview today, Cloud IDS is said to be a cloud-native, managed intrusion detection system that enterprises can deploy in just a few clicks in order to protect themselves against malware, spyware, command-and-control attacks and other network-based threats, Potti said. Google worked closely with Palo Alto Networks Inc. to develop Cloud IDS. The system incorporates that company’s advanced threat detection technologies to detect malicious network activity with very low false positives. It’s essentially a managed version of Palo Alto’s threat detection services, available in Google Cloud, where scaling, availability and updates are all automated. Google Cloud IDS stands out for its flexibility, the company says. It can easily be integrated with third-party security information and event management and security orchestration, automation and response platforms, enabling users to both investigate and automatically respond to any alerts, Potti said. 


Advanced Technology Outcomes: Humans Vs. Machine Or Human With Machine?

There is no doubt that we humans have always benefited from machines, and that we have had the power to turn them off when required. But now the situation has turned around. Machines increasingly play a vital role both as single units and collectively as infrastructures, which means humans no longer have the option to simply shut them off. In the health sector, too, machines are evolving at a fast rate: surgery is becoming robotized, medical diagnostics has become dependent on machines, and there are even automated machines manufacturing drugs. Pulling the plug would therefore have terrible consequences for thousands of people worldwide. Beyond all this, we use machines as extensions of ourselves, applying them as stronger, faster, and cheaper hands. Because of this, we still hold the advantage over machines, but it is on us to make accurate decisions for the future. We engage with machines continuously: we use smartphones to navigate to a destination, to look up recipes, even to check our health, and the list is growing rapidly.


Bringing Your Factory to the Edge in 2021

Is your factory living in the dark ages? Are you constantly checking manual reports to see your production scores? Do you wish that you could check your factory health on your smart device from anywhere in the world? If so, you could benefit from taking your factory to the edge. ... Reading information directly from our fieldbus-connected devices works great for a retrofit if you are an end user and not a programmable logic controller (PLC) programmer, or if you do not have access to the controller in the system because the integrator did not provide source codes. You can use a number of protocol converters and commercially available edge connection devices to take your machine-level data to an edge platform with some basic education online. For a large number of users, this option will get their factory “talking” to them for minimal human or equipment capital. It will require only protocol conversion and an edge connector (which we will discuss in a moment) and the cloud setup of choice, which can be outsourced.


MosaicLoader Malware Delivers Facebook Stealers, RATs

Once installed on a machine, the malware creates a complex chain of processes, according to Bitdefender. Its hallmark, researchers said, is a unique obfuscation technique that shuffles small code chunks around resulting in an intricate, mosaic-like structure – hence the name. The first stage of the execution flow is the installation of a dropper, which mimics legitimate software: Most of the first-stage droppers that researchers analyzed have icons and “version information” that mirror those used for legitimate applications. In some cases, the dropper pretends to be an NVIDIA process, for instance. The dropper makes contact with the C2 (the URL of the C2 is hardcoded as a string), then downloads a .ZIP file into the %TEMP% folder that contains two files required for the second stage: appsetup.exe and prun.exe. These are extracted to an innocuous-sounding “PublicGaming” folder in the C: directory, while the dropper also launches several instances of PowerShell to add Windows Defender exclusions for the folder and the specific file names.


The biggest remote communication challenges within organisations

Zooming back out to an organisational level, recent events have pushed leadership teams to fully embrace digital transformation. For many organisations, making remote work viable meant pulling together capabilities from a range of technology providers into something of a patchwork of solutions that didn’t necessarily behave well together but was necessary given the initial organisational shock. Recognising that remote working is going to be a significant and constant part of our working landscape, it is now time to think about how to make this tech stack work more effectively. In many cases, this will involve consolidation, ideally onto a single CRM platform, where the sharing of customer and prospect data between marketing, sales and customer service teams is seamless, and where the platform supports growth instead of creating friction points. ... The effects of COVID-19 disrupted the working landscape profoundly last year, meaning that UK organisations have had to rethink their working strategies. It is vital that business leaders constantly keep in touch with their employees and support them when these changes are taking place. 


Image encryption technique could keep photos safe on popular cloud photo services

Now researchers have created a way for mobile users to enjoy popular cloud photo services while protecting their photos. The system, dubbed Easy Secure Photos (ESP), encrypts photos uploaded to cloud services so that attackers – or the cloud services themselves – cannot decipher them. At the same time, users can visually browse and display these images as if they weren’t encrypted. “Even if your account is hacked, attackers can’t get your photos because they are encrypted,” said Jason Nieh, professor of computer science and co-director of the Software Systems Laboratory. ESP employs an image encryption algorithm whose resulting files can be compressed and still get recognized as images, albeit ones that look like black and white static to anyone except authorized users. In addition, ESP works for both lossy and lossless image formats such as JPEG and PNG, and is efficient enough for use on mobile devices. Encrypting each image results in three black-and-white files, each one encoding details about the original image’s red, green, or blue data.
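The split-then-encrypt structure can be sketched with a toy cipher. This is not the actual ESP algorithm: ESP's scheme is format-preserving, so its three output files still compress and decode as (static-looking) images, whereas the XOR sketch below only illustrates splitting an image into per-channel streams and encrypting each independently.

```python
# Toy sketch of ESP's split-then-encrypt idea (NOT the real ESP algorithm):
# split an RGB image into R/G/B byte streams, XOR each with a keystream
# derived from the key and a per-channel label, and recombine on decryption.
import hashlib

def keystream(key: bytes, label: bytes, n: int) -> bytes:
    """Deterministic keystream from SHA-256 in counter mode (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + label + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_channels(pixels, key: bytes):
    """pixels: list of (r, g, b) tuples -> three encrypted byte strings."""
    channels = [bytes(p[i] for p in pixels) for i in range(3)]
    return [bytes(a ^ b for a, b in zip(ch, keystream(key, lab, len(ch))))
            for ch, lab in zip(channels, [b"R", b"G", b"B"])]

def decrypt_channels(enc, key: bytes):
    channels = [bytes(a ^ b for a, b in zip(ch, keystream(key, lab, len(ch))))
                for ch, lab in zip(enc, [b"R", b"G", b"B"])]
    return list(zip(*channels))  # back to (r, g, b) tuples

img = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (10, 20, 30)]
enc = encrypt_channels(img, key=b"secret")
assert decrypt_channels(enc, b"secret") == img  # round-trip succeeds
```

The key property mirrored here is that each channel file alone is meaningless ciphertext, while the authorized key holder can reconstruct the original pixels exactly.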



Quote for the day:

"Leaders can choose to grow and change, but generally the most powerful predictor of future performance is past behavior. Evaluate them realistically." -- Lee Ellis

Daily Tech Digest - July 20, 2021

3 Ways To Make Conversational AI Work For Your Organization

AI systems possess features unlike any mechanisms we use in human-human conversation. Consequently, you can use them in powerful ways to create conversations and experiences that go beyond what’s possible with people alone. Unlike humans, AI can be available around the clock -- whether to answer a question in the middle of the night or to support an asynchronous conversation that stretches over many days. In addition, machines' absence of emotion and moral judgment provides a distinct advantage in some situations. When the subject of a conversation is sensitive, interactions with AI can afford a degree of anonymity that some customers welcome. And when it comes to pattern recognition, AI excels at detecting fraud or breaches of regulatory requirements. AI can watch for events that are about to happen and proactively engage in anticipation, creating a superior experience. And finally, AI is moving to a point where it can all but read your mind. ...  Another point of tension is the potential for manipulation. Persuasive computing can change people’s attitudes or behaviors, while practices like hypernudging use data to steer people toward certain decisions.


Making transformation stick

Leaders must model the behaviors that will be required to sustain change. This can be done with literal acts and symbolic acts that communicate to rank-and-file employees the leaders’ commitment to the transformation. A study by the National Institute for Health Research in the UK highlights the importance of role modeling. The institute reviewed transformation programs in clinical settings and found that out of a variety of factors affecting the longevity of the transformation, senior and clinical leader role modeling was the highest predictor of sustainable change. The study defined role modeling as leaders being seen promoting and investing in the change. The transformation experience of one of our clients bears out this finding. The company recently adopted customer relationship management software that features a tool for gathering insights from client meetings. But using the tool requires the company’s client-facing employees to write up meeting notes, something many find tedious. So, the CEO of the business regularly uses the tool and sends notifications of his written reports to his executive team and their direct reports. This is a powerful example of role modeling.


How smarter data analysis can transform financial planning

Reliance on legacy spreadsheets is inefficient and causes a tremendous amount of overhead and friction for analysts – the opposite of what you want in a process that should be essential for every business. Many of the solutions to these problems involve moving away from Excel entirely, which also isn’t practical in many cases. Smaller businesses, in particular, may not have the time or manpower to migrate their data and the deep logic they’ve built into their Excel sheets to a new platform. “While the rest of the business world moves to powerful, cloud-based SaaS solutions driven by AI and automation, finance departments remain entrenched in Excel,” says Gurfinkel. “While it’s a powerful tool, it lacks modern features that could help drive better forecasting. The ideal solution is one that builds on Excel to leverage its strengths while minimising its weaknesses, rather than trying (and failing) to replace it.” “Automation” has nearly reached buzzword status at this point, but that doesn’t mean the advantages it offers aren’t real. Automation has the potential to transform nearly every facet of work – including financial planning.
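As a small illustration of the kind of repetitive spreadsheet work such automation can absorb, here is a toy moving-average forecast in plain Python. The revenue figures are made up; in practice the rows would come from Excel exports or a finance system rather than a hardcoded list.

```python
# Toy illustration: a three-month moving-average revenue forecast, the sort
# of calculation analysts often maintain by hand in spreadsheets.

def moving_average_forecast(series, window=3):
    """Forecast the next period as the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("need at least `window` observations")
    return sum(series[-window:]) / window

revenue = [120_000, 125_000, 131_000, 128_000, 136_000]  # made-up monthly figures
print(round(moving_average_forecast(revenue)))  # mean of the last three months
```

A real deployment would, as Gurfinkel suggests, build on top of the existing Excel workbooks rather than replace them, pulling the series out, computing the forecast, and writing the result back.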


Banking is broken. This small FinTech startup plans to fix it

The sheer breadth of banking services Modularbank covers is one of the company's key strengths, says Vene, who points out that competitors have often had to partner with third-party firms to provide the same services. She also believes that the decades of technology and banking experience under Modularbank's belt mean it can tackle complex use cases and customer demands more comfortably than some of its competitors. "To build highly configurable modules, you have to know the product side of finance well. It's not enough to have great technology and great engineers in your team if you don't know what the customer needs to configure in your products," says Vene. Security is another area where experience plays a critical role, and arguably nowhere is this more important than in finance. "We have been working in this field for so many years with highly regulated organizations, so it was normal for us to focus on liability and security from day one," says Vene. For instance, GDPR compliance has been designed into Modularbank's products from the beginning, she says. 


How We Tracked a Threat Group Running an Active Cryptojacking Campaign

After the attackers find and enter a Linux device with inadequate SSH credentials, they deploy and execute the loader. In the current campaign, they use .93joshua, but they have a couple of others at their disposal: .purrple and .black. All of the loaders are obfuscated via shc. The loader gathers system information and relays it to the attacker using an HTTP POST to a Discord webhook. By using Discord, the threat actors circumvent the need to host their own command-and-control server, as webhooks are a means of posting data to a Discord channel programmatically. The gathered data can also be conveniently viewed on a channel. Discord is increasingly popular among threat actors because of this functionality, as it involuntarily provides support for malware distribution (use of its CDN), command-and-control (webhooks), and communities centered around buying and selling malware source code and services (e.g. DDoS). The information gathered at this step lets the threat actor gauge the effectiveness of their tools in infecting machines. The list of victims may also be collected to carry out potential post-exploitation steps.
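For defenders, the shape of this check-in traffic is worth picturing. Below is a minimal sketch of the JSON body such a loader would POST to a webhook; the URL is a placeholder, the payload fields are assumptions, and nothing is actually sent. The observable to hunt for is outbound POSTs to discord.com/api/webhooks/* carrying host details.

```python
# Defensive illustration only: build (but do not send) the kind of webhook
# check-in body described in the article. Discord webhooks accept a JSON
# body whose "content" field is the message text posted to the channel.
import json
import platform

WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder

def build_checkin_payload() -> bytes:
    # Hypothetical selection of fields; real loaders gather much more.
    info = {
        "hostname": platform.node(),
        "os": platform.system(),
        "release": platform.release(),
    }
    return json.dumps({"content": json.dumps(info)}).encode()

body = build_checkin_payload()
print(json.loads(body)["content"])
```

Because the traffic goes to a legitimate, widely allowed domain over HTTPS, blocking by destination alone is ineffective; egress policies and anomaly detection on which hosts talk to webhook endpoints are better levers.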


New AI-Based Augmented Innovation Tool Promises to Transform Engineer Problem Solving

What will often happen is that as you work through both the “Functional Concepts” and “Inventive Principles” lists you begin to realize that you’ve omitted elements from your description, or that your description should go in a slightly different direction based on the results. While this represents a slightly iterative process, each iteration is just as fast as the first. In fact, it's faster because you no longer need to spend 10 minutes writing down your changes. All along the process, there's a workbook, similar to an electronic lab notebook, for you to jot down your ideas. As you jot down your ideas based on the recommendations from the AI, it will offer you the ability to run a concept evaluation, telling you whether the concept is “marginally acceptable” or “good”, for example. You can use this concept evaluation tool to understand whether you have written your problem and solution in a way that is unique or novel, or whether you should consider going back to the drawing board to keep iterating on it.


Unconventional Superconductor May Unlock New Ways To Build Quantum Computers

Scientists on the hunt for an unconventional kind of superconductor have produced the most compelling evidence to date that they’ve found one. In a pair of papers, researchers at the University of Maryland’s (UMD) Quantum Materials Center (QMC) and colleagues have shown that uranium ditelluride (or UTe2 for short) displays many of the hallmarks of a topological superconductor — a material that may unlock new ways to build quantum computers and other futuristic devices. “Nature can be wicked,” says Johnpierre Paglione, a professor of physics at UMD, the director of QMC and senior author on one of the papers. “There could be other reasons we’re seeing all this wacky stuff, but honestly, in my career, I’ve never seen anything like it.” All superconductors carry electrical currents without any resistance. It’s kind of their thing. The wiring behind your walls can’t rival this feat, which is one of many reasons that large coils of superconducting wires and not normal copper wires have been used in MRI machines and other scientific equipment for decades.


Combating deepfakes: How we can future-proof our biometric identities

Deepfakes refer to manipulated videos or other digital representations produced by sophisticated artificial intelligence (AI), which yield fabricated images and sounds that appear to be real. While video deepfakes are arguably the most common, audio deepfakes are also growing in popularity. ... Firstly, we must think about how biometric authentication works. Take voice biometrics as an example: a good fake voice (even just a good impersonator) can be enough to fool a human. However, voice biometric software is much better at identifying differences that the human ear either doesn’t discern or chooses to ignore, which means that voice biometric ID can help prevent fraud if identity is checked against the voice. Even deepfakes create a poor copy of someone’s voice when analyzed at the digital level; they make quite convincing cameos, especially when combined with video, but again these are poor imitations at a digital level. Outside of this, the ability of deepfakes to bypass biometrics-based solutions will ultimately depend on the type of liveness detection that is integrated into the solution. 


Is EDR The Silver Bullet For Malware?

Absolute security isn’t possible, as we all know — our control framework is only as strong as our weakest link. In recent years, we’ve seen great strides in innovation surrounding virtualization tools. This new technology, while useful to organizations and users in general, has also given hackers more power to bypass traditional defenses. To prove this, I carried out a small exercise — I aimed to evade an EDR solution using a virtualization tool. Virtualization technology has opened up many doors for businesses hoping to scale up, but security controls haven’t scaled fast enough to secure these virtualized environments. As such, we’re currently only focused on deploying EDR solutions on physical endpoints because many people assume that anything running on a physical host will be protected from malicious activities. When it comes to virtualization, these tools create an opaque layer within which they manage an environment. Because of this, any EDR solution running on the physical host won’t have visibility into the files and services running on that virtualized image. I used this concept to bypass an EDR solution running on a physical host to simulate an attack on the network.


Moving into "Modern Test Leadership"

Test leaders can ignite passion in testers by finding ways to engage them. Start a community of practice, share blogs, videos, podcasts or get external speakers to come and share their wisdom with the team. You may find from trying some of these, that some of the testers may start wanting to try new ways of testing, or start learning new skills. The next step would be to nurture that passion, point them in the right direction for their career and let them run with it. ... The role of a test leader needs to change; gone are the days of a test manager being the sole point of contact from a quality perspective and being responsible for handing out testing tasks to a team. With the world of agile/DevOps becoming a lot more prominent, the role needs to evolve to being more a test coach, advocating for good testing practices, helping to evolve the culture, raising awareness of what the testers can do and what good quality is. They need to be a servant leader and support their team to fulfil their potential. Being a test leader in the current world is a challenge, but you really can reap what you sow. 



Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it. " -- Marian Anderson

Daily Tech Digest - July 19, 2021

IoT security: Development and defense

While IoT adoption continues to grow, the standards, compliance requirements and secure coding practices surrounding IoT have not advanced at the same rate. Recent high profile software supply chain attacks have brought the issue of secure coding into sharp focus, prompting the Biden administration to issue an executive order addressing new requirements for federal agencies to only purchase and deploy secure software. This pivotal shift will have an immediate impact on global software development processes and lifecycles, especially when you consider the vast reach of U.S. federal procurement. Virtually all device manufacturers and software companies will be impacted directly as the administration begins to increase obligations on the private sector and establish new security standards across the industry. Specific to IoT, the order directs the federal government to initiate pilot programs to educate the public of the security capabilities of IoT devices, and to identify IoT cybersecurity criteria and secure software development practices for a consumer-labeling program.


Efficient unit-testing with a containerised database

The real problem is mixing two languages in one body of code. The dbUtil handle is just a boilerplate-reduction device here. The raw SQL is still there. We still can’t test the complex individual statements separately from the simple yet crucial control logic captured in the if-statements, which depends solely on the state of the person object, not on the database. Sure, we can test this control logic fine if we mock out the calls to the database. The mock for dbUtil returns a prepared list of person objects, and we can verify its correct invocation for the two different conditions. That unavoidably leaves the SQL untested. If we want to test the execution of these statements, we need to run the entire code inside the for loop, this time using a real database. That test needs to set up the conditions for all three execution paths (condition 1, 1 and 2, or none), as well as verify what happened to the state after executing the void statements. It can be done, but we are of necessity testing both the Java and SQL realms here. That’s hardly the lean unit testing we’re looking for.
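The mock-based half of this trade-off can be sketched briefly. The original discussion is about Java; the Python analogue below uses `unittest.mock`, and `sync_person`, `db`, and the SQL strings are hypothetical stand-ins. The test verifies which statements run for which person state, while the SQL strings themselves remain unexecuted and therefore untested, which is exactly the gap being described.

```python
# Sketch: testing control logic with the database handle mocked out.
# The SQL is never executed, so only the if-statement logic is verified.
from unittest.mock import Mock

def sync_person(person, db):
    if person["active"]:
        db.execute("UPDATE person SET last_seen = now() WHERE id = %s",
                   person["id"])
    if person["active"] and person["email"] is None:
        db.execute("INSERT INTO reminders (person_id) VALUES (%s)",
                   person["id"])

db = Mock()
sync_person({"id": 1, "active": True, "email": None}, db)
assert db.execute.call_count == 2   # both branches taken

db.reset_mock()
sync_person({"id": 2, "active": False, "email": "a@b.c"}, db)
db.execute.assert_not_called()      # neither statement runs
```

Covering the SQL itself would require the containerized real database the article goes on to discuss; the mock test and the container test answer different questions.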


Ansible vs Docker: A Detailed Comparison Of DevOps Tools

Ansible is an open-source automation engine that helps in DevOps and comes to the rescue to improve your technological environment’s scalability, consistency, and reliability. It is mainly used for rigorous IT tasks such as configuration management, application deployment, intraservice orchestration, and provisioning. In recent times, Ansible has become the top choice for software automation in many organizations. Automation is one of the most crucial aspects of industry these days; unfortunately, many IT environments are too complex and need to scale too quickly for system administrators and developers to keep up manually. ... Docker is an open-source platform for developing, shipping, and running applications. It enables developers to package applications into containers, a set of standardized and executable components that combine the application source code with the operating system libraries and dependencies required to run that code in an executable environment. Containers can even be created without Docker, but the platform and user interface make it easier, simpler, and safer to build, deploy and manage containers. 
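The configuration-management style Ansible uses can be illustrated with a minimal playbook. This is a hypothetical example; the `web` host group is an assumption, and the modules shown (`ansible.builtin.package`, `ansible.builtin.service`) are standard Ansible built-ins.

```yaml
# Hypothetical minimal playbook: install and start nginx on hosts in "web".
- hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

The playbook is declarative and idempotent: running it twice leaves the hosts unchanged, which is what makes this style suit configuration management rather than one-off scripting.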


Delegation and Scale: How Remote Work Affected Various Industries

The basic goal of delegation of authority is to enable efficient organization. Just as no single individual in a company can do all of the tasks required to achieve a group's goals, it becomes arduous for management to wield all decision-making authority as a business expands. There is a limit to the number of people a manager can successfully monitor and decide for; when this threshold is reached, authority must be handed to subordinates. While centralization was still a possibility before the pandemic, this was no longer the case after back-to-back lockdowns and economic slowdowns. In such a situation, delegation came as a boon that not only kept the workflow active but also helped in scaling growth. ... Delegating gives your team greater confidence, makes them feel important, and allows them to demonstrate their abilities. This results in mutual appreciation, with colleagues motivating one another to work harder and staying devoted to attaining the goals. 


Seeking a Competitive Edge vs. Chasing Savings in the Cloud

If companies do not make changes to their IT operations in response to a migration, finding savings can be more difficult, L’Horset says. “In the industry, there’s a lot of debate: Is cloud saving you money or not? Our research indicates that even at the basic level, yes it does,” he says. “The difference between the cost-savings, which you can get through cloud, and the value of innovation that you absolutely can and should get through cloud, is the fundamental reason you should go.” Roy Illsley, chief analyst with Omdia, the research arm of Informa Tech, says the cost benefits of cloud can be positive if the workload is variable in its resource requirements, if its resource requirements match the cloud provider’s packaging of resources, or if it requires high availability. "If the workload is stable in its resource requirements then on-premises is more cost effective," he says. Companies responding to the Accenture survey that did not list cloud as a top priority still saw significant cost-savings, says Jim Wilson, managing director of information technology and business research at Accenture Research. 
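Illsley's point about variable workloads can be made concrete with a back-of-the-envelope comparison. A toy sketch, where the rates, the cloud provider's per-unit premium, and the bursty usage profile are all illustrative assumptions:

```python
# Toy cost comparison: fixed on-prem capacity billed whether used or not,
# vs. pay-per-use cloud. All rates and workloads are illustrative.

def onprem_cost(peak_units, rate_per_unit_hour, hours):
    # On-prem must be provisioned for peak load and runs regardless of use.
    return peak_units * rate_per_unit_hour * hours

def cloud_cost(usage_per_hour, rate_per_unit_hour):
    # Cloud bills only for what each hour actually consumes.
    return sum(u * rate_per_unit_hour for u in usage_per_hour)

# A bursty workload: near-idle most of the day, peaking at 10 units for 2 hours.
usage = [1] * 22 + [10] * 2
print(onprem_cost(10, 1.0, 24))   # provisioned for peak: 240.0
print(cloud_cost(usage, 1.2))     # pay-per-use at a premium rate: ~50.4
```

With a stable workload that runs near peak all day, the comparison flips, which is exactly Illsley's caveat.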


7 Ways AI and ML Are Helping and Hurting Cybersecurity

AI/ML is used in network traffic analysis, intrusion detection systems, intrusion prevention systems, secure access service edge, user and entity behavior analytics, and most technology domains described in Gartner's Impact Radar for Security. In fact, it's hard to imagine a modern security tool without some kind of AI/ML magic in it. ... Through social engineering and other techniques, ML is used for better victim profiling, and cybercriminals leverage this information to accelerate attacks. For example, in 2018, WordPress websites experienced massive ML-based botnet infections that granted hackers access to users' personal information; ... Ransomware is experiencing an unfortunate renaissance. Examples of criminal success stories are numerous; one of the nastiest incidents led to Colonial Pipeline's six-day shutdown and $4.4 million ransom payment; ... ML algorithms can create fake messages that look like real ones and aim to steal user credentials. In a Black Hat presentation, John Seymour and Philip Tully detailed how an ML algorithm produced viral tweets with fake phishing links that were four times more effective than human-created phishing messages.
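The user and entity behavior analytics mentioned above boils down to flagging activity that deviates sharply from a learned baseline. A minimal sketch of that idea, using a simple z-score in place of a real ML model, with illustrative numbers:

```python
# Toy user-behavior anomaly detector: flag events that deviate strongly
# from a user's historical baseline (z-score). Illustrative only.
from statistics import mean, stdev

def anomalous(history, observed, threshold=3.0):
    """Return True if `observed` lies more than `threshold` standard
    deviations from the mean of `history`."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# A user who normally downloads ~10 MB per session suddenly pulls 500 MB.
baseline = [8, 12, 9, 11, 10, 9, 13, 10]
print(anomalous(baseline, 500))  # -> True
print(anomalous(baseline, 11))   # -> False
```

Production UEBA systems model many features at once and learn the baseline continuously, but the core contrast between expected and observed behavior is the same.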


Electronic signatures: please sign on the digital line

First, let’s look at the importance of content to a business. In simple terms, content is the inherent value of a company. It’s NASA’s designs for its new space station, AstraZeneca’s highly regulated pharmaceutical patents, and Oxfam’s humanitarian aid records. It’s the clinical trial results for the next breakthrough vaccine, or the blueprint for an innovative new approach to flooding solutions. Content is the entire work of an organisation and is completely unique to every company. It is the database of an organisation’s most valuable insights. But to effectively realise this value, organisations need to find a single place for their content. Separating content between different silos and applications creates friction, which can stand in the way of employees accessing and sharing information, inhibiting innovation and productivity. Applications in today’s content-driven world are often judged by their ease of integration with other technologies. As a result, businesses are turning to single platforms where content can be securely stored and managed, while all compliance requirements are met and all teams have the opportunity to collaborate on the content, both internally and externally.


Protect your smartphone from radio-based attacks

An IMSI catcher is equipment designed to mimic a real cell tower so that a targeted smartphone will connect to it instead of the real cell network. Various techniques may be employed to do this, such as masquerading as a neighboring cell tower or jamming the competing 5G/4G/3G frequencies with white noise. After capturing the targeted smartphone’s IMSI (the ID number linked to its SIM card), the IMSI catcher situates itself between the phone and its cellular network. From there, it can be used to track the user’s location, extract certain types of data from the phone, and in some cases even deliver spyware to the device. Unfortunately, there’s no surefire way for the average smartphone user to know that they’re connected to a fake cell tower, though there may be some clues: perhaps a noticeably slower connection or a change of band in the phone’s status bar (from LTE to 2G, for example). Thankfully, 5G in standalone mode promises to make IMSI catchers obsolete, since the Subscription Permanent Identifier (SUPI) – 5G’s IMSI equivalent – is never disclosed in the handshake between smartphone and cell tower. 


The value of data — a new structural challenge for data scientists

Some companies with data scientists in place have difficulty operationalising their skills. Looking at the volumes of data organisations process, and at their differing structures and architectures, not every company needs a data scientist among its data experts. For companies managing an astronomical amount of data, across multiple channels and with a complex structure, the expertise of a data scientist will prove beneficial in modelling data, querying it and making predictions. One of the first questions to ask therefore concerns data and business needs: the team should be organised according to the organisation’s structure and its data strategy. Companies have also realised that hiring a data scientist was not the answer to their data value problems. This is partly due to a lack of understanding of the environment surrounding data. A data scientist may understand the data, but not its purposes, environments or business applications. Take the example of a marketing department working on implementing AI to accelerate its web ROI. 


Interview With Prof B Ravindran, Head, Robert Bosch Centre For Data Science & AI

Interpretability of deep learning models is essential for the widespread adoption of these techniques in the medical image diagnosis community. Deep learning models have been phenomenally successful at beating the state of the art in common medical image diagnosis tasks such as segmentation and screening, e.g. classification of diabetic retinopathy and chest X-ray scans, among others. While these successes have created huge interest in adopting these techniques in clinical practice, a major barrier to adoption is the lack of interpretability of these models. Convolutional neural networks with hundreds of layers are the workhorse of medical image diagnosis. While the initial layers typically act as edge and shape detectors, it is nearly impossible to explain or interpret the feature maps as one goes deeper into the network. For clinicians to trust the output of these networks, it is essential that a mechanism for explaining the output be present. In addition, black-box techniques make it hard for clinicians to justify diagnoses and follow-up procedures.
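One model-agnostic way to provide the explanations clinicians need is occlusion sensitivity: hide one region of the input at a time and measure how much the model's score drops, producing a heatmap of which regions the prediction depends on. A toy sketch, with a trivial stand-in scoring function in place of a real CNN:

```python
# Occlusion-sensitivity sketch: slide a mask over the input and record how
# much the model's score drops when each region is hidden. Illustrative only.

def occlusion_map(image, model, patch=2):
    n = len(image)                       # assumes a square n x n image
    base = model(image)
    heat = [[0.0] * n for _ in range(n)]
    for r in range(0, n, patch):
        for c in range(0, n, patch):
            occluded = [row[:] for row in image]
            for i in range(r, min(r + patch, n)):
                for j in range(c, min(c + patch, n)):
                    occluded[i][j] = 0   # hide this patch
            drop = base - model(occluded)
            for i in range(r, min(r + patch, n)):
                for j in range(c, min(c + patch, n)):
                    heat[i][j] = drop    # importance of the hidden patch
    return heat

def toy_model(img):
    # Stand-in "model": the score is just the total intensity of the image.
    return sum(sum(row) for row in img)

img = [[9, 9, 0, 0],
       [9, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
heat = occlusion_map(img, toy_model)
# Hiding the bright top-left patch drops the score by 36; hiding empty
# regions changes nothing, so the heatmap points at the top-left corner.
```

With a real CNN, `model` would return the probability of a diagnosis, and the resulting heatmap highlights which image regions drive that diagnosis.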



Quote for the day:

"Honor bespeaks worth. Confidence begets trust. Service brings satisfaction. Cooperation proves the quality of leadership." -- James Cash Penney