Daily Tech Digest - February 07, 2021

5 Reasons Why You Should Code Daily As A Data Scientist

By coding on a daily basis, you will gain the ability to integrate Data Science and other elementary concepts into the projects you develop. The best part about practicing code each day is that it allows you, as a coder, to subconsciously develop the skill of solving computational tasks in the form of code. By gaining more information and learning more details, you can use code blocks more effectively and efficiently to work through complex programming problems. You can even find solutions to Data Science tasks that you might otherwise have struggled with, simply because you stay in touch with the subject; it also helps you gather discrete, more innovative ideas for implementation faster. Coding daily will sharpen your deductive abilities and turn you into a better programmer. By following sound coding principles, you can take your skills to the next level and develop into a truly accomplished programmer. Consider, for example, the Python programming language, which is the one most commonly used for Data Science: by developing your coding skills in Python, you are also advancing your ability to code complex Data Science projects.


Privacy and Security Are No Longer One-Size-Fits-All

Historically, cross-divisional relationships between InfoSec, IT, Legal, and business or product teams can be challenging, in the sense that the internal conversations and ongoing education required for these teams to sync on protocols and security measures can be extensive. Moreover, policy enforcement between these teams without proper technical safeguards can result in the accidental (or, in rare instances, malicious) leakage of data that would put an individual’s or business’s reputation at risk. By taking a more contextual, technically enforced approach to data governance, the all-too-common risks associated with human-led processes can be eliminated, which in turn improves collaboration and enforcement between these internal teams. When the right technical controls are in place based on the data context, whether through privacy-enhancing techniques or in partnership with a reliable technology partner, enterprises no longer have to rely on the manual human processes that may lead to sensitive data leakage or re-identification risk. Only this alignment between internal teams will allow an enterprise’s overall data strategy to flourish. Many of the world’s most successful companies, such as Amazon or Unilever, have gained sizable market share through data collaboration, starting within their own enterprise.


Providing Customer-Driven Value With A ToGAF® Based Enterprise Architecture

The concept of value needs to be mastered and used by enterprise architects in a customer-driven enterprise, as shown in Figure 1 above. An organization usually provides several value propositions to its different customer segments (or personas) and partners, and these are delivered by value streams made up of several value stages. Value stages have internal stakeholders, external stakeholders, and often the customer as participants. Value stages enable customer journey steps, are enabled by capabilities, and are operationalized by processes (usually level 2 or 3). The TOGAF® Business Architecture: Value Stream Guide video provides a very clear and simple explanation, should you want to know more. Customer journeys are not, strictly speaking, part of business architecture, but they can still be very useful for interfacing with business stakeholders. These value streams and stages cannot be realized out of thin air. An organization must have the ability to achieve a specific purpose, namely to provide value to the triggering stakeholder, in this case the customer. This ability is an enabling business capability. Without it, the organization cannot provide value to triggering stakeholders (customers). A capability enables a value stage and is operationalized by a business process.


A Brief History of Metadata

A primary goal of metadata is to assist researchers in finding relevant information and discovering resources. Keywords used in the descriptions are called “meta tags.” Metadata is also used in organizing electronic resources, providing digital identification, and supporting the preservation and archiving of data. Metadata assists researchers in discovering resources by locating relevant criteria and providing location information. In terms of digital marketing, metadata can be used to organize and display content, maximizing marketing efforts. Metadata increases brand visibility and improves “findability.” Different metadata standards are used for different disciplines (such as digital audio files, websites, or museum collections). A web page, for example, may contain metadata describing the software language, the tools used to create it, and the location of more information on the subject. ... Metadata found online and in digital marketing is a crucial tool for modern marketing. Metadata can help people find a website. It makes web content more searchable, and when used efficiently, metadata can increase the number of visits.


Center for Applied Data Ethics suggests treating AI like a bureaucracy

People privileged enough to be considered the default by data scientists and who are not directly impacted by algorithmic bias and other harms may see the underrepresentation of race or gender as inconsequential. Data Feminism authors Catherine D’Ignazio and Lauren Klein describe this as “privilege hazard.” As Alkhatib put it, “other people have to recognize that race, gender, their experience of disability, or other dimensions of their lives inextricably affect how they experience the world.” He also cautions against uncritically accepting AI’s promise of a better world. “AIs cause so much harm because they exhort us to live in their utopia,” the paper reads. “Framing AI as creating and imposing its own utopia against which people are judged is deliberately suggestive. The intention is to square us as designers and participants in systems against the reality that the world that computer scientists have captured in data is one that surveils, scrutinizes, and excludes the very groups that it most badly misreads. It squares us against the fact that the people we subject these systems to repeatedly endure abuse, harassment, and real violence precisely because they fall outside the paradigmatic model that the state — and now the algorithm — has constructed to describe the world.”


Can IoT accelerate a return to offices?

Thanks to recent developments, both the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) are making Internet-enabled devices an increasingly common feature in business. This trend was already on the upswing prior to the global lockdown, with 46% of IT, telecommunications, and business managers saying their organizations had already invested in IoT applications and/or services, while another 30% were readying to invest in the next 12-24 months. But with millions of workers now working and communicating virtually from remote locations, the role of IoT in helping to realize the vision of the smart enterprise has drawn even greater interest. For one thing, IoT has gained traction as an enabler of productive remote work and unhindered team collaboration across industries, said Igor Efremov, the head of HR at Itransition, a Denver-based software development company. Sectors that formerly could not function continuously without human labor, such as manufacturing, can now rely on IIoT infrastructure, including sensors, cameras, and endpoints, that allows technicians to monitor and maintain asset performance in real time without being physically present.


Is there a human consequence to WFH?

There is no such thing as a perfect work/life balance, of course. And while some employers understand the stress faced by their workers, not all do. Switched-on employers who care about their staff provide employee choice schemes, target-related bonuses, personal support, subsidized Wi-Fi, and a judgement-free attitude toward sick days. Those who don’t care insist workers stay on camera all day and refuse to accept child care or any other need as an excuse for absence, while indulging in fire-and-rehire policies. To a great extent, corporate responsibility around employee care in this environment has effectively been outsourced to employees themselves, even as productivity (and working hours) increase. And while Apple’s devices and third-party apps can help remote workers manage time more effectively, the need for all parties to develop new ways of working that don’t encroach on personal space remains a challenge. This isn’t a platform-specific matter, of course: Windows or Mac, Android or iPhone, iPad or some other tablet, enterprise workers of every stripe face complex challenges as they juggle work and personal responsibilities.


You Should Master Data Analytics First Before Becoming a Data Scientist

As a Data Scientist, you will have to perform feature engineering, where you isolate the key features that contribute to the prediction of your model. In school, or wherever you learned Data Science, you may have had a perfect dataset already made for you, but in the real world you will have to use SQL to query your database to start finding the necessary data. In addition to the columns that you already have in your tables, you will have to make new ones; usually these are new features, such as aggregated metrics like clicks per user. As a Data Analyst, you will practice SQL the most, and as a Data Scientist it can be frustrating if all you know is Python or R: you cannot rely on pandas all the time, and as a result you cannot even start the model-building process without knowing how to efficiently query your database. Similarly, a focus on analytics lets you practice creating subqueries and metrics like the one above, so that you can add anywhere from a few to, say, a hundred new features, created entirely by you, that could prove more important than the base data you have now. ... A Data Analyst usually will master visualizations, because they have to present findings in a way that is easily digestible for others in the company.
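The clicks-per-user feature mentioned above can be sketched end to end with a grouped aggregate. The table and column names here are hypothetical, and an in-memory SQLite database stands in for whatever warehouse the team actually queries:

```python
import sqlite3

# Hypothetical event table; the names (events, user_id, event_type)
# are illustrative, not from the article.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event_type TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, "click"), (1, "click"), (1, "view"), (2, "click"), (3, "view")],
)

# Derive a new feature -- clicks per user -- with a grouped aggregate,
# the kind of engineered column the excerpt describes.
rows = conn.execute(
    """
    SELECT user_id,
           SUM(CASE WHEN event_type = 'click' THEN 1 ELSE 0 END) AS clicks
    FROM events
    GROUP BY user_id
    ORDER BY user_id
    """
).fetchall()

print(rows)  # [(1, 2), (2, 1), (3, 0)]
```

The same `SUM(CASE WHEN ...)` pattern (or a filtered subquery) is how many such derived features get built before being joined back onto the base table for modeling.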


Introduction - The 4 Stages of Data Sophistication

Companies today are quite good at collecting data, but still very poor at organizing and learning from it. Setting up a proper Data Governance organization, workflow, tools, and an effective data stack are essential tasks if a business wants to gain from its information. This book is for organizations of all sizes that want to build the right data stack for them: one that is both practical and enables them to be as informed as possible. It is a continually improving, community-driven book teaching modern data governance techniques for companies at different levels of data sophistication. In it we progress from the initial setup of a new startup to a mature, data-driven enterprise, covering architectures, tools, team organization, common pitfalls, and best practices as data needs expand. The structure and original chapters of this book were written by the leadership and Data Advisor teams at Chartio, sharing our experiences working with hundreds of companies over the past decade. Here we’ve compiled our learnings and open-sourced them in a free, open book.


Data Residency Compliance with Distributed Cloud Approach

Rather than providing a centralized solution for everything, a distributed cloud can meet each specific customer and country requirement. It also gives enterprises the ability to properly use their original investments in existing central clouds while executing a unified cloud strategy for location-based data needs. This is especially important when customers are looking to utilize SaaS solutions that depend on central clouds, since they often can’t easily localize data independently. Hybrid clouds were originally intended to enable a unified strategy. Yet enterprises continue to struggle to get the level of value they initially expected out of their private cloud deployments, especially in compliance-centric use cases where ongoing research and expertise are needed. However, enterprises can now consider a distributed cloud-based offering such as ‘Data Residency-as-a-Service’ to meet global compliance standards. As businesses look to expand into new countries that require data sets to be localized in different regions, they must stay ahead of these challenges or risk losing business in those regions altogether.



Quote for the day:

"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown

Daily Tech Digest - February 06, 2021

Artificial intelligence must not be allowed to replace the imperfection of human empathy

In the perfectly productive world, humans would be accounted as worthless, certainly in terms of productivity but also in terms of our feeble humanity. Unless we jettison this perfectionist attitude towards life that positions productivity and “material growth” above sustainability and individual happiness, AI research could be another chain in the history of self-defeating human inventions. Already we are witnessing discrimination in algorithmic calculations. Recently, a popular South Korean chatbot named Lee Luda was taken offline. “She” was modelled after the persona of a 20-year-old female university student and was removed from Facebook messenger after using hate speech towards LGBT people. Meanwhile, automated weapons programmed to kill are carrying maxims such as “productivity” and “efficiency” into battle. As a result, war has become more sustainable. The proliferation of drone warfare is a very vivid example of these new forms of conflict. They create a virtual reality that is almost absent from our grasp. But it would be comical to depict AI as an inevitable Orwellian nightmare of an army of super-intelligent “Terminators” whose mission is to erase the human race.


The robots are ready – how can business leaders take the leap?

Robots and intelligent technology can now optimise something we’ve never been able to before: the bandwidth of employees. This has become increasingly critical as staff adjust to remote working. By onboarding these new tools and incorporating them into the workforce, businesses can empower their staff to do more. They can automate mundane and repetitive tasks extremely quickly, giving their human colleagues more time to take on problem-solving and more demanding, time-consuming tasks. In fact, 4 in 5 employees who use robots and digital workers say these tools have improved efficiency and collaboration and are useful in easing the burden of administrative tasks. Employees have found that a ‘robotic helping hand’ has been most appreciated for sorting data and documents, providing prompts for pending tasks, and digitising paperwork. What’s also clear is that some businesses do have the right tools in place to help. In fact, half of UK employees said processes helped them do their job faster and collaborate better, both critical during the pandemic. However, for business leaders, the pressure to get automation right is huge. It’s a major investment of time, money, and energy for everyone involved.


Why process mining is seeing triple-digit growth

Many enterprises are finding it difficult to scale beyond a few software robots or bots because they are automating a bad process that cannot scale. “Most businesses are automating processes through RPA and hyperautomation without first fully understanding their data and processes,” explained Gero Decker, CEO of Signavio, a business transformation company being acquired by SAP. As enterprises pursue increased efficiencies, there is debate about whether it makes more sense to automate what exists or to fix it first. Automating a bad process may make it faster, but it may also suffer from chokepoints caused by integration with legacy systems or approval processes. Process mining can help a company fix a bad process first. Chris Nicholson, CEO of Pathmind, a company applying AI to industrial operations, argues, “The main challenge to overcome before applying process automation is to standardize the current processes performed by people. If they are not standardized, there can be no automation.” With process mining, companies can see whether their current processes are standardized so they know which problem they have to solve first: standardization or automation.
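The standardization check described above can be sketched in a few lines of Python: group an event log into per-case activity sequences ("traces") and count the distinct variants. The toy log and activity names below are invented for illustration; real process-mining tools do the same counting at scale:

```python
from collections import Counter

# Toy event log: (case_id, activity) pairs in order of occurrence.
# Both the cases and the activity names are made up for this sketch.
event_log = [
    ("A", "receive"), ("A", "approve"), ("A", "ship"),
    ("B", "receive"), ("B", "approve"), ("B", "ship"),
    ("C", "receive"), ("C", "ship"), ("C", "approve"),  # out-of-order case
]

# Group each case into its trace (ordered sequence of activities).
traces = {}
for case_id, activity in event_log:
    traces.setdefault(case_id, []).append(activity)

# Count distinct process variants across all cases.
variants = Counter(tuple(trace) for trace in traces.values())

print(len(variants))                   # number of distinct variants
print(variants.most_common(1)[0][1])   # how many cases follow the dominant path
```

Many distinct variants with no dominant path is a signal to standardize first; one or two dominant variants suggest the process may already be a candidate for automation.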


Sophisticated cybersecurity threats demand collaborative, global response

The cybersecurity industry has long been aware that sophisticated and well-funded actors were theoretically capable of advanced techniques, patience, and operating below the radar, but this incident has proven that it isn’t just theoretical. We believe the Solorigate incident has proven the benefit of the industry working together to share information, strengthen defenses, and respond to attacks. Additionally, the attacks have reinforced two key points that the industry has been advocating for a while now—defense-in-depth protections and embracing a zero trust mindset. Defense-in-depth protections and best practices are really important because each layer of defense provides an extra opportunity to detect an attack and take action before they get closer to valuable assets. We saw this ourselves in our internal investigation, where we found evidence of attempted activities that were thwarted by defense-in-depth protections. So, we again want to reiterate the value of industry best practices such as outlined here, and implementing Privileged Access Workstations (PAW) as part of a strategy to protect privileged accounts.


AI Transformation in 2021: In-Depth guide for executives

AI transformation touches all aspects of the modern enterprise, including both commercial and operational activities. Tech giants are integrating AI into their processes and products; Google, for example, calls itself an “AI-first” organization. Beyond the tech giants, IDC estimates that at least 90% of new organizations will insert AI technology into their processes and products by 2025. ... The first few projects should create measurable business value while remaining attainable. This is important for the transformation to gain trust across the organization, and delivered projects create momentum that leads to more ambitious AI work. These projects can rely on AI/ML-powered tools in the marketplace, or, for more custom solutions, your company can run a data science competition and rely on the wisdom of hundreds of data scientists. These competitions use encrypted data and provide a low-cost way to find high-performing data science solutions. bitgrit is a company that helps companies identify AI use cases and run data science competitions. Implementing process mining tools is one of those easy-to-achieve and impactful projects. For example, QPR’s Process Analyzer tool has an extensive set of ready-to-use process mining analyses, including clustering analysis and process predictions, as well as a platform for machine-learning-based analyses.


Microsoft Says It's Time to Attack Your Machine-Learning Models

Machine-learning researchers are focused on attacks that pollute machine learning data, epitomized by presenting two seemingly identical images of, say, a tabby cat and having the AI algorithm identify them as two completely different things, he said. More than 2,000 papers have been written in the last few years citing these sorts of examples and proposing defenses, he said. "Meanwhile, security professionals are dealing with things like SolarWinds, software updates and SSL patches, phishing and education, ransomware, and cloud credentials that you just checked into Github," Anderson said. "And they are left to wonder what the recognition of a tabby cat has to do with the problems they are dealing with today." ... Anderson shared a red team exercise conducted by Microsoft where the team aimed to abuse a Web portal used for software resource requests and the internal machine-learning algorithm that automatically determines to which physical hardware it assigns a requested container or virtual machine. The red team started with credentials for the service, under the assumption that attackers will be able to gather valid credentials - either by phishing or because an employee reuses their user name and password.
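The tabby-cat example refers to adversarial inputs: tiny, targeted perturbations that flip a model's prediction while the input looks unchanged to a human. Below is a minimal fast-gradient-sign-style sketch on a toy logistic-regression "classifier"; the model, the input, and the step size are all invented for illustration, and real attacks of this kind target deep networks:

```python
import math
import random

# Toy "image classifier": logistic regression over a 16-pixel input.
# Weights and input are random stand-ins, not a trained model.
random.seed(0)
w = [random.gauss(0, 1) for _ in range(16)]   # model weights
x = [random.gauss(0, 1) for _ in range(16)]   # clean input, true label y = 1
y = 1.0

def predict(v):
    z = sum(wi * vi for wi, vi in zip(w, v))  # logit
    return 1.0 / (1.0 + math.exp(-z))         # sigmoid score for class 1

# For logistic regression, the gradient of the cross-entropy loss with
# respect to the *input* is (p - y) * w.
p = predict(x)
grad_x = [(p - y) * wi for wi in w]

# Fast-gradient-sign step: nudge every pixel a fixed small amount in the
# loss-increasing direction.
eps = 0.25
sign = lambda g: (g > 0) - (g < 0)
x_adv = [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

print(predict(x), predict(x_adv))  # the adversarial score should be lower
```

Each pixel moves by at most `eps`, so the perturbed input stays close to the original even though the model's confidence in the true class drops.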


Microsoft: Office 365 Was Not SolarWinds Initial Attack Vector

In its Thursday blog, the Microsoft team says the compromise techniques leveraged by the SolarWinds hackers included "password spraying, spear-phishing and use of webshell through a web server and delegated credentials." Earlier this week, acting CISA Director Brandon Wales told The Wall Street Journal that the SolarWinds cyberespionage operation gained access to targets using a multitude of methods, including password spraying and through exploits of vulnerabilities in cloud software (see: SolarWinds Hackers Cast a Wide Net). "As part of the investigative team working with FireEye, we were able to analyze the attacker’s behavior with a forensic investigation and identify unusual technical indicators that would not be associated with normal user interactions. We then used our telemetry to search for those indicators and identify organizations where credentials had likely been compromised by the [SolarWinds hackers]," Microsoft's security team says. But Microsoft says it's found no evidence that the SolarWinds hackers used Office 365 as an attack vector. "We have investigated thoroughly and have found no evidence they [SolarWinds] were attacked via Office 365," the Microsoft researchers say. "The wording of the SolarWinds 8K filing was unfortunately ambiguous, leading to erroneous interpretation and speculation, which is not supported by the results of our investigation."


Data loss prevention strategies for long-term remote teams

For many, a distributed hybrid workforce is the new normal, vastly expanding their threat landscape and making it more challenging to secure data and IT infrastructure. In this environment, companies need to pivot their defensive capacity, ensuring that they are prepared to meet the moment (i.e., the threats). When considering cybersecurity threats, we often think of shady cybercriminals or nation-states hacking company networks. After all, when these incidents occur, they make worldwide news headlines. For most companies, however, external bad actors aren’t the most critical risk. A company’s employees often pose a more prominent and – luckily – a more manageable cybersecurity threat. IBM estimates that human error causes nearly a quarter of all data breaches. Additionally, employees commonly and inadvertently compromise company data through poor password hygiene, accidental data sharing, improper technology use, phishing scams, and more. Some employees will also act maliciously, intentionally stealing company data for profit, retribution, or fun. The market for sensitive data is so prolific that some cybersecurity experts predict the emergence of insiders-as-a-service as bad actors capitalize on remote work trends to infiltrate companies.


The Rise of Responsible AI

In the public safety arena, using biased data to train AI to identify criminals via cyber forensics can lead to the wrongful conviction of innocent people: the output of the software is influenced by race and ethnicity data points introduced either because the code was not tested properly or because the wrong data sets were used for testing, destroying lives as a result. Apart from bias in the data set, we have also seen that during application or transactional data processing there is no transparency into why a decision was taken, which parameters influenced it, and what additional steps the algorithm took to mitigate problems. All of these questions can be answered by embedding explainability and transparency in AI design processes, to provide understandability of the context and interpretability of the decisions made by the AI. Thus we need Responsible AI: the practice of using AI with good intentions to empower employees and businesses and to impact customers and society fairly, allowing companies to engender trust and scale AI with confidence, along with a framework to ensure the ethical, transparent, and accountable use of AI technologies, consistent with user expectations, organizational values, and societal laws and norms.


Adaptive Frontline Incident Response: Human-Centered Incident Management

Many companies struggle with defining an incident. To us, an incident is when a service or feature’s functionality is degraded. But defining "degraded" contains a multitude of possibilities. One could say "degraded" is when something isn’t working as expected. But what if it’s better than expected? What’s the expected behavior? Do you define it based on customer impact? Do you wait until there’s customer impact to declare an issue an incident? This is where having a common, shared understanding of the normal operating behavior of the system, and formalizing it in feature/service-level objectives and indicators, is key. We have to know what we expect in order to know when a degradation becomes an incident. But defining service-level objectives for legacy services already in operation takes a significant investment of time and energy that might not be available right now. That’s the reality in which we frequently operate, trading off efficiency with thoroughness, as Hollnagel (2009) points out. We handle this tradeoff with a governing set of generic thresholds that fill in for services without clear indicators. At Twilio we have a lot of products, running the gamut from voice calls, video conferencing, and text messages to email and two-factor authentication.
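The generic-threshold fallback described above can be sketched as a simple lookup: use a tailored SLO where one exists, and a default objective everywhere else. The service names, thresholds, and traffic numbers below are invented for illustration, not Twilio's actual values:

```python
# Generic fallback objective for services without a tuned SLO:
# allow at most a 1% failed-request rate (an assumed, illustrative number).
DEFAULT_SLO = {"max_error_rate": 0.01}

# Services with tailored, service-specific objectives.
service_slos = {
    "voice": {"max_error_rate": 0.001},  # tuned SLO: at most 0.1% errors
    # "email" has no entry, so it inherits the generic default.
}

def is_incident(service, errors, total):
    """Declare an incident when the observed error rate breaches the SLO."""
    slo = service_slos.get(service, DEFAULT_SLO)
    error_rate = errors / total if total else 0.0
    return error_rate > slo["max_error_rate"]

print(is_incident("voice", errors=5, total=1000))  # True: 0.5% > 0.1%
print(is_incident("email", errors=5, total=1000))  # False: 0.5% <= 1%
```

The same 0.5% error rate is an incident for the tightly specified service and normal operation for the one running on the generic threshold, which is exactly the tradeoff the excerpt describes.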



Quote for the day:

"Don't look back. Something might be gaining on you." -- Satchel Paige

Daily Tech Digest - February 05, 2021

Riding out the wave of disruption

Disruption is not necessarily the crisis it’s frequently considered to be for incumbents, the researchers stress. Two technologies can often coexist in the marketplace for a significant period. Thus, it’s important for incumbent companies not to overreact. They should target dual users and reexamine the factors that have led to the old technology sticking around for so long. Of course, the profit implications of cannibalization of the old technology and leapfrogging depend on which type of firm is trumpeting the new technology. New entrants will always stand to gain when they introduce a technology that takes off. But incumbents rolling out a successive technology will also gain if their competitors would have introduced it anyway or if the 2.0 version has a higher profit margin than the original. The authors write, “Leapfroggers are an opportunity loss for incumbents, but switchers are a real loss.” Regardless of the predictive model they use, marketers should strive to understand how the various consumer segments identified in this study will grow or shrink over time and use that information in their forecasts of early sales or market penetration of successive technologies.


AI and APIs: The A+ Answers to Keeping Data Secure and Private

Adding to the complexity is ensuring that AI and data are used ethically, Marques points out. Secure AI comprises two key categories, he says: responsible AI and confidential AI. Responsible AI focuses on regulations, privacy, trust, and ethics related to decision-making using AI and ML models. Confidential AI involves how companies share data with others to address a common business problem. For example, airlines might want to pool data to better understand maintenance, repair, and parts-failure issues while avoiding exposing proprietary data to the other companies. Without protections in place, others might see confidential data. The same types of issues are common among healthcare companies and financial services firms. Despite the desire to add more data to a pool, there are also deep concerns about how, where, and when the data is used. In fact, complying with regulations is merely a starting point for a more robust and digital-centric data management framework, Jahil explains. Security and privacy must extend into a data ecosystem and out to customers and their PII. For example, CCPA has expanded the concept of PII to include any information that may help identify an individual, like hair color or personal preferences.

What is a data center REIT?

The rationale for converting to REIT status will vary from company to company, but broadly it offers beneficial tax status and greater access to capital for growth. “The biggest benefit is that REITs don’t pay any corporate tax,” says Millionacres’ Frankel. “Think of a data center company that isn't a REIT. Its income can effectively be taxed twice; once at the corporate level when the company earns a profit, and again on the individual level when the company pays a dividend to investors.” The rules on whether an organization can apply for REIT classification vary from country to country, but broadly the minimum requirement is to derive the majority of your revenue from real-estate activities, such as rent on a portfolio of properties, and to distribute the majority of that revenue to a number of investors. “REITs are able to raise capital more easily via share issuances and/or joint venture partnerships, as investors have a better idea of the company’s financial situation once public,” says Cushman & Wakefield’s Imboden. “The degree of difficulty [in becoming a REIT] depends largely on whether the company was structured and managed with the intention of becoming a REIT, or if the decision was made after years of operating.”
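Frankel's double-taxation point is easy to make concrete with a little arithmetic. The tax rates below are hypothetical round numbers for illustration, not actual statutory rates:

```python
# Illustrative double-taxation arithmetic; both rates are assumed values.
profit = 100.0
corporate_tax = 0.21   # hypothetical corporate income tax rate
dividend_tax = 0.15    # hypothetical investor-level tax rate on dividends

# Non-REIT: profit is taxed at the corporate level, then the remaining
# dividend is taxed again in the investor's hands.
non_reit_to_investor = profit * (1 - corporate_tax) * (1 - dividend_tax)

# REIT: no corporate tax, provided the profit is distributed; the
# investor pays tax once on the distribution.
reit_to_investor = profit * (1 - dividend_tax)

print(f"non-REIT investor receives: {non_reit_to_investor:.2f}")  # 67.15
print(f"REIT investor receives:     {reit_to_investor:.2f}")      # 85.00
```

Under these assumed rates, the REIT structure leaves the investor with 85.00 of every 100.00 of profit, versus 67.15 for the doubly taxed corporation.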


12 security career-killers (and how to avoid them)

“The biggest problem I’ve seen is security people who think security is the be-all and end-all. They go in with that attitude, and they don’t see how they have to enable the business,” says James Carder, CSO of the security tech company LogRhythm. He says they instead need to collaborate with their business-unit colleagues to understand their objectives and then be an enabler, not a hindrance. Others agree. “Security is a profession that has plenty of standards and regulations and frameworks, but too many times we try to implement them in a blind way, from the perspective of the standards, instead of trying to implement them in the context of the business,” adds Russ Kirby, CISO of software company ForgeRock. Similarly, Kirby has seen security pros become so focused on their own objectives that they alienate other departments that might otherwise want to work together to find a solution. He points to one scenario where security staffers wanted to change an application’s minimum password length from 8 characters to upwards of 20. The IT application team pushed back, explaining that they could go to 12 characters but anything more would take significant time and money to change.


Six industries impacted by the combination of 5G and edge computing

"Weather and humidity can impact the performance of 5G,'' Roberts added; he also noted that, as 5G continues to proliferate, there will be many more cell towers. That's consistent with recently released research by PwC, which reported that "the performance of 5G networks remains uneven." Widespread usage is not here yet "because it's a big challenge to upgrade infrastructure," agreed Mark Sami, a director at West Monroe. Right now, for example, to get Verizon's Ultra Wideband network, "you need a line of sight to a tower so you have to be in close proximity," Sami said. ... "It's all about driving applications and how do you make these 5G and edge solutions [work] in a manner where you create more opportunities for the developer community to write applications to that infrastructure architecture,'' said Sid Nag, a vice-president at Gartner. Some 90% of industrial enterprises will use edge computing by 2020, according to Frost & Sullivan. "The applications are endless,'' observed Chris Steffen, a research director at Enterprise Management Associates. "Every vertical is going to be impacted in some way,'' he added, depending on specific use cases and relevance.


Why Disconnected Data Grinds Customer Journeys to a Halt

Business architecture matters because it defines and explains the relationships between customer business processes. And information and application architecture matter because they define the major types of information and the applications that process customer data. Clearly, this kind of systems thinking is essential to defining holistic customer journeys — or, in the language of marketing, the friction points between customer-facing systems and the data that flows between them. Thinking this way raises questions like why customers need to interface with applications separately and why they have to enter data multiple times when interacting with these separate applications — two big sources of customer journey friction. Data limits the quality of the customer journey at three major points: a company’s sales, marketing and service processes. As economist Theodore Levitt put it, “the role of marketing is creating and keeping the customer,” and sales and marketing processes should be built around that goal. To create or obtain new customers, organizations must simplify the processes to become a customer, regardless of the customer channel chosen. In practice this means integrating customer-facing systems, so customers enter information only once.


Rust Could Be the Secret to Next-Gen Computing

The team think there are good prospects for using ‘rust’ to create super-efficient computers. This is because, although very simple in architecture, the Fe2O3-based device where merons and bimerons were found already contains all the ingredients needed to manipulate these tiny bits quickly and efficiently – by flowing a tiny electrical current in an extremely thin metallic ‘overcoat’. In fact, the team state that controlling and observing the movement of merons and bimerons in real time is the goal of a future X-ray microscopy experiment, currently in the planning phase. Moving from basic to applied research means cost and compatibility considerations are of paramount importance. While iron oxide is extremely abundant and cheap, the fabrication techniques employed by the researchers in Singapore and Madison are complex and require atomic-scale control. However, the team are optimistic, as they recently demonstrated that it is possible to ‘peel off’ a thin layer of oxide from its growth medium and stick it almost anywhere, with its properties largely unaffected. They say their next steps will be the design and fabrication of proof-of-principle devices based on ‘cosmic strings’.


New Opportunities from Tech-Driven Industry Convergence

When we study the evolution of information technology, we find that companies traditionally leveraged technology solutions to serve specific business functions within an industry. For example, in life sciences or pharmaceutical companies, technology solutions were usually grouped by function such as commercial, R&D, and supply chain. Most solutions were explicitly designed for a specific process and had little scope for portability across sectors. However, as technologies evolved, solutions have become increasingly broad-based and sector-agnostic. While cloud and high-tech companies still provide industry-specific solutions, there is a convergence in the types of problems they solve for customers across industries. ... As the lines get blurred, we need to rethink our traditional approach to grouping various sectors when building technology solutions. For instance, all consumer-facing industries such as CPG, pharma, insurance, and manufacturing are likely to have significant overlap in the challenges they face. Similarly, healthcare, finance, medical devices, retail, and telecommunications are likely to find common ground.


Networking software can ease the complexity of multicloud management

Cloud providers offer essential tools in three key areas: security, networking, and management and orchestration (MANO). Their security capabilities and controls often must be manually implemented, and their networking requires that traffic be specifically routed through their on-ramps and off-ramps, which each provider optimizes. Each cloud also has its own MANO tools providing management, visibility, and automation capabilities that must be configured in order to see and tune application performance. That means a learning curve and fragmented MANO for enterprise IT teams that support multicloud environments. These factors combine to make many IT operations involving IaaS multiclouds difficult to scale, and the task of troubleshooting performance slowdowns tedious and time consuming. The leading IaaS providers are building new access capabilities at the edge of their networks. Key to user experience is network performance, which relies on network routing to and from the nearest cloud on-ramp. Leveraging WAN network intelligence is essential to delivering a reliable, high-quality experience between applications in the public cloud and end users. Enterprise IT will require the network intelligence to connect to the best IaaS point of presence to accelerate application delivery.


The transportation sector needs a standards-driven, industry-wide approach to cybersecurity

We have already witnessed attacks on electric vehicle charging stations via the Near-Field Communication (NFC) card that handles billing for EV charging. The ID cards have inherent vulnerabilities because third-party providers do not secure customer data. Research has shown malicious individuals can copy these cards and use them to charge other vehicles. Another concern relates to traditional lithium-ion batteries, which are used in EVs and have the potential to explode. While this issue is being addressed by battery suppliers with investment in R&D, the safety effort must also consider the risk of cyber attacks. If it’s known that a battery in an EV can explode, this may increase the likelihood that a bad actor will target this type of car with the intent to cause harm. As EV battery technology advances, it’s imperative that comprehensive cybersecurity measures evolve and improve in parallel so automakers and technology providers can prevent this type of hacking from occurring. As the AV industry advances, so will the incentives for hackers. There is an increased potential for financial crimes committed via ransomware attacks. Further, these attacks could cause vehicles to behave abnormally, potentially endangering human lives.



Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan but also believe." -- Anatole France

Daily Tech Digest - February 04, 2021

5 Trends for Industry 4.0: The Factory of the Future

The growing complexity of machine software, as well as the ongoing modularization of modern production equipment, has led to more simulation upfront. The fact that international travel for commissioning or service has been significantly reduced, or in some cases halted, these days reinforces this trend. Functional tests of the production equipment of the future will be performed using comprehensive models for simulation and virtual commissioning. The factory of the future will be built twice—first virtually, then physically. Digital representations of production machines, continuously fed with live data from the field, will be used for health monitoring throughout the entire lifetime of the equipment and will eventually make onsite missions the exception ... Flexible production in the factory of the future will require robots and autonomous handling systems to adapt faster to changing requirements. While classic programming and teaching of robots isn’t suitable for preparing a system to handle the huge and fast-growing number of different goods, future handling equipment will automatically learn through reinforcement learning and other AI techniques. The prerequisites—massive calculation power and huge amounts of data—have been established over the past years.


Runtime data no longer has to be vulnerable data

With all of these security advantages, you might think that CISOs would have quickly moved to protect their applications and data by implementing secure enclaves. But market adoption has been limited by a number of factors. First, using the secure enclave protection hardware requires a different instruction set, and applications must be rewritten and recompiled to work. Each of the different proprietary implementations of enclave-enabling technologies requires its own rewrite. In most cases, enterprise IT organizations can’t afford to stop and port their applications, and they certainly can’t afford to port them to four different platforms. In the case of legacy or commercial off-the-shelf software, rewriting applications is not even an option. While secure enclave technologies do a great job protecting memory, they don’t cover storage and network communications – resources upon which most applications depend. Another limiting factor has been the lack of market awareness. Server vendors and cloud providers have quickly embraced the new technology, but most IT organizations still may not know about these offerings.


Liquid Neural Network: What’s A Worm Got To Do With It?

Liquid networks make models more robust by improving their resilience to unexpected and noisy data. For instance, they can make algorithms adjust to heavy rain that obscures a self-driving car’s vision. Liquid networks also make algorithms more interpretable: because of the expressive nature of their neurons, the network can help overcome the black-box nature of machine learning algorithms. The liquid network has outperformed other state-of-the-art time series algorithms by a few percentage points in predicting future values in datasets used in atmospheric chemistry and traffic patterns. Apart from the high reliability, it also helped reduce computational costs. The researchers were aiming for fewer but richer nodes in the algorithm. In other words, the study focused on scaling down the network rather than scaling up. “This is a way forward for the future of robot control, natural language processing, video processing — any form of time series data processing,” said Ramin Hasani, the paper’s lead author. ... Tremendous progress has been made in developing smart bots that can perform multiple intelligent tasks, like working alongside humans or giving mental health advice. However, their adoption presents significant concerns in terms of safety and ethics.
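In code, the core idea can be sketched in a few lines. The snippet below is an illustrative liquid time-constant style update, a simplification of the published LTC formulation; all sizes, weights, and constants here are made up. The key feature is that the effective time constant of each neuron depends on the current input:

```python
import numpy as np

def ltc_step(x, inp, W, b, tau, A, dt=0.01):
    """One Euler step of a liquid time-constant style update.

    The gate f depends on both state and input, so the effective
    time constant (1/tau + f) changes with the data the cell sees.
    """
    f = np.tanh(W @ np.concatenate([x, inp]) + b)  # input-dependent gate
    dx = -(1.0 / tau + f) * x + f * A              # LTC-style ODE right-hand side
    return x + dt * dx

rng = np.random.default_rng(0)
n, m = 4, 2                       # hidden units, input size (illustrative)
x = np.zeros(n)
W = rng.normal(size=(n, n + m)) * 0.5
b = np.zeros(n)
tau = np.ones(n)                  # base time constants
A = np.ones(n)                    # bias targets

for t in range(100):              # integrate over a noisy input stream
    x = ltc_step(x, rng.normal(size=m), W, b, tau, A)
```

Because the gate changes with the input, the same cell relaxes quickly for some inputs and slowly for others, which is what gives these networks their adaptability to shifting conditions.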


Virtual Panel: The MicroProfile Influence on Microservices Frameworks

The term cloud-native is still a large gray area and its concept is still under discussion. If you, for example, read ten articles and books on the subject, all these materials will describe a different concept. However, what these concepts have in common is the same objective - get the most out of technologies within the cloud computing model. MicroProfile popularized this discussion and created a place for companies and communities to bring successful and unsuccessful cases. In addition, it promotes good practices with APIs, such as MicroProfile Config and the third factor of The Twelve-Factor App. ... The use of reflection by the frameworks has its trade-offs, for example, at application start and in memory consumption: the framework usually invokes the inner class ReflectionData within Class.java, which is instantiated as type SoftReference and therefore takes some time to be released from memory. So, I feel that in the future, some frameworks will generate metadata with reflection and other frameworks will generate this type of information at compile time, like the Annotation Processing API or similar. We can see this kind of evolution already happening in CDI Lite, for example.


General Availability of the new PnP Framework library for automating SharePoint Online operations

Over time the classic PnP Sites Core has grown into a hard-to-maintain code base, which made us decide to start a major upgrade effort for all PnP .NET components. As a result, PnP Framework is a slimmed-down version of PnP Sites Core, dropping legacy pieces and dropping support for on-premises SharePoint in favor of improved quality and maintainability. If you’re still using PnP Sites Core with your on-premises SharePoint then that’s perfectly fine; we’re not going to pull these components, but you’ll not see any updated versions going forward. PnP Framework is a first milestone in the upgrade of the PnP .NET components; in parallel we’re building a brand new PnP Core SDK using modern .NET development techniques focused on performance and quality (check our test coverage and documentation). Over time we’ll implement more and more of the PnP Framework functionality in PnP Core SDK and then replace the internal implementation in PnP Framework. The modern pages API is a good example: when you use that API in PnP Framework you’re actually using the implementation done in PnP Core SDK. The picture below gives an overview of our journey and the road ahead:


Endpoint Detection and Response: How Hackers Have Evolved

While kernel mode is the most elevated type of access, it does come with several drawbacks that complicate EDR effectiveness. In kernel mode, visibility can be quite limited, as several data points are only available in user mode. Also, third-party kernel-based drivers are often difficult to develop, and if not properly vetted they can lead to higher chances of system instability. The kernel is often regarded as the most fragile part of a system, and any panics or errors in kernel-mode code can cause huge problems, even crashing the system entirely. User mode is often more appealing to attackers as it has no way of directly accessing the underlying hardware. Code that runs in user mode must use API functions that interact with the hardware on behalf of the application, allowing for more stability and fewer system-wide crashes (as application crashes will not affect the system). As a result, applications that run in user mode need minimal privileges and are more stable. Suffice it to say, a lot of EDR products rely heavily on user-mode hooks over kernel mode, making things interesting for attackers. Because the hooks exist in user mode and hook into our processes, we have control over them. Since applications run within the user’s context, everything that's loaded into our process can be manipulated by the user in some form or another.


Continuous Delivery: Why You Need It and How to Get Started

For decades, enterprise software providers have focused on delivering large quarterly releases. "This system is slow because if there are any bugs in such a large release, developers have to sift through the deployed update in its entirety to find the problem to patch," said Eric Johnson, executive vice president of engineering for open-source code collaboration platform provider GitLab. Enterprises committed to CD rapidly deliver a string of highly granular releases. "This way, if there are any bugs in a new individual release they’re easily and swiftly addressed by developers' teams." Most developers appreciate CD because it helps them deliver higher quality work while limiting the risk of introducing unwanted change into production environments. CD ensures that the entire software delivery lifecycle, from source control to building and testing to artifact release and ultimately deployment into real environments, is automated and consistent, explained Brent Austin, director of engineering at Liberty Mutual Insurance. High levels of test automation are critical in CD, allowing developers to introduce changes quickly, with high confidence and higher quality. "CD also helps developers think in small batch sizes, which allows for easier and more effective rollback scenarios when issues are found and makes introducing change safer," Austin said.
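The "string of highly granular releases" pattern described above can be sketched as a toy pipeline runner; everything here, stage names included, is illustrative. Each small release runs through automated stages in order, and any failure triggers an immediate rollback of the partial release:

```python
def run_pipeline(stages, rollback):
    """Run delivery stages in order; on any failure, roll back and stop.

    `stages` is a list of (name, callable) pairs where each callable
    returns a truthy value on success; `rollback` undoes the partial
    release given the list of stages that had already completed.
    """
    completed = []
    for name, step in stages:
        if step():
            completed.append(name)
        else:
            rollback(completed)   # small batch sizes make rollback easy
            return False, completed
    return True, completed

log = []
stages = [
    ("build",  lambda: log.append("built") or True),
    ("test",   lambda: log.append("tested") or True),
    ("deploy", lambda: log.append("deploy failed") and False),  # simulated failure
]
ok, done = run_pipeline(stages, lambda done: log.append(f"rolled back {done}"))
```

Because each release is small, the failed "deploy" stage points directly at the offending change, rather than forcing developers to sift through a quarter's worth of updates.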


Interview With a Russian Cybercriminal

Interacting with a ransomware operator is "unusual, but not that unusual," says Craig Williams, director of outreach for Cisco Talos. Of course, a key challenge in chatting with a criminal is knowing when to trust them. Researchers asked many questions they were able to verify, but there were scenarios in which they felt Aleks wasn't telling the whole story. Williams says the strongest example of this related to targeting the healthcare industry. "He pointed out how he didn't target healthcare customers … but then knew an awful lot about when healthcare paid, and in what situations they paid, and what type of data they have, and exactly how valuable it would be, and if they had insurance, they were more likely to pay," he explains. For example, Aleks reportedly told researchers hospitals pay 80% to 90% of the time. Aleks seems to choose victims based on their ability to pay quickly, Williams says, though the report notes the attacker's views may not represent those of LockBit group. For example, Aleks says the EU's General Data Protection Regulation (GDPR) may work in adversaries' favor. Victim companies are more likely to pay "quickly and quietly" so as to avoid penalties under GDPR.


The most important skills for successful AI deployments

As AI has bolstered the operations of more and more sectors, it’s become apparent that knowledge of the technology alone isn’t enough for deployments to succeed. Whether the AI solution is serving companies or individuals, the engineers behind the roll-out need to understand the business at hand. “The company needs people who know the principles of how these algorithms work, and how to train the machine, but can also understand the business domain and sector,” said Sanz-Saiz. “Without this understanding, training an algorithm can be more complex. Any successful data scientist not only needs to bring technical expertise, but also needs to have domain and sector expertise as well.” Without sufficient industry knowledge, decision-making can become inaccurate, and in some cases, such as healthcare, it can also be dangerous. Companies such as Kheiron Medical have been using an AI solution to transform cancer screening, accelerating the process and minimising human error. For this to be effective, careful assessments and evaluations at every stage of the screening procedure need to be in place. “I think a commitment to clinical rigour needs to underpin everything that we do,” explained Sarah Kerruish, chief strategy officer at Kheiron.


Google’s New Approach To AutoML And Why It’s Gaining Traction

AutoML is an automated process of searching for a child program from a search space to maximise a reward. The researchers broke the process down into a sequence of symbolic operations. That is, a child program is turned into a symbolic child program. The symbolic program is then ‘hyperified’ into a search space by replacing some of its fixed parts with to-be-determined specifications. During the search, the search space materialises into different child programs based on search algorithm decisions. It can also be rewritten into a super-program to apply complex search algorithms such as efficient NAS (ENAS). PyGlove is a general symbolic programming library for Python. Using this library, Python classes as well as functions can be made mutable through brief Python annotations, making it easier to write AutoML programs. The library also allows AutoML techniques to be quickly dropped into pre-existing machine learning pipelines, while benefiting open-ended research that requires extreme flexibility. PyGlove implements various popular search algorithms, such as PPO, Regularised Evolution and Random Search.
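The search loop described above can be illustrated without any AutoML library. The sketch below is plain Python, not the actual PyGlove API: a search space of to-be-determined slots is materialized into concrete child programs by a Random Search style algorithm, and a stand-in reward function picks the winner. All names and values are made up:

```python
import random

# A "symbolic" child program: a fixed structure whose to-be-determined
# slots form the search space.
search_space = {
    "optimizer": ["sgd", "adam"],
    "layers":    [2, 4, 8],
    "dropout":   [0.0, 0.1, 0.3],
}

def materialize(space, decisions):
    """Turn a search algorithm's decisions into a concrete child program."""
    return {key: choices[decisions[key]] for key, choices in space.items()}

def reward(program):
    """Stand-in for actually training and evaluating the child program."""
    return program["layers"] * (1.0 - program["dropout"])

# Random Search: the algorithm proposes decisions, the space materializes
# them into child programs, and the best observed reward wins.
rng = random.Random(0)
candidates = [
    materialize(search_space,
                {k: rng.randrange(len(v)) for k, v in search_space.items()})
    for _ in range(20)
]
best = max(candidates, key=reward)
```

Swapping the random decision-maker for PPO or Regularised Evolution changes only how `decisions` are proposed; the symbolic space and the materialization step stay the same, which is the decoupling the paper emphasises.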



Quote for the day:

"If you can't handle others' disapproval, then leadership isn't for you." -- Miles Anthony Smith

Daily Tech Digest - February 03, 2021

Usability Testing: the Ultimate Guide [Free Checklist]

Generally speaking, usability testing comes in two types: moderated and unmoderated. Moderated sessions are guided by a researcher or a designer, while unmoderated ones rely on users’ own unassisted efforts. Moderated tests are an excellent choice if you want to observe users interact with prototypes in real time. This approach is more goal-oriented — it lets you confirm or disconfirm existing hypotheses with more confidence. On the other hand, unmoderated usability tests are convenient when working with a substantial pool of subjects. A large number of participants allows you to identify a broader spectrum of issues and points of view. However, it’s important to underline that testing isn’t that black and white. It’s best to look at this practice as a spectrum between moderated and unmoderated testing. Sometimes, during unmoderated sessions, we like to nudge our subjects in the right direction through mild moderation when necessary. Testing our prototypes can provide us with a wide array of insights. Fundamentally, it helps us spot flaws in our designs and identify potential solutions to the issues we’ve uncovered. We learn about the parts of our product that confuse or frustrate our users. By disregarding this step, we’re opening ourselves up to the possibility of releasing a product that causes too much friction.


Linux malware backdoors supercomputers

ESET researchers have reverse engineered this small, yet complex malware that is portable to many operating systems including Linux, BSD, Solaris, and possibly AIX and Windows. “We have named this malware Kobalos for its tiny code size and many tricks; in Greek mythology, a kobalos is a small, mischievous creature,” explains Marc-Etienne Léveillé, who investigated the malware. “It has to be said that this level of sophistication is only rarely seen in Linux malware.” Kobalos is a backdoor containing broad commands that don’t reveal the intent of the attackers. It grants remote access to the file system, provides the ability to spawn terminal sessions, and allows proxying connections to other Kobalos-infected servers, Léveillé notes. Any server compromised by Kobalos can be turned into a Command & Control (C&C) server by the operators sending a single command. As the C&C server IP addresses and ports are hardcoded into the executable, the operators can then generate new Kobalos samples that use this new C&C server. In addition, in most systems compromised by Kobalos, the client for secure communication (SSH) is compromised to steal credentials.


Disrupting the patent ecosystem with blockchain and AI

Applying the power of AI and blockchain to IP assets enables a paradigm shift in how IP is understood and managed. Companies that understand and adopt this new paradigm will be rewarded. Last year, we announced the inclusion of IPwe — the world’s first AI and blockchain-powered patent platform, among our selection of the next wave of enterprise blockchain business networks. The Paris-based start-up has since deployed a suite of leading-edge IP solutions, removing barriers by addressing fundamental issues within today’s patent ecosystem. IPwe is partnering with IBM to accelerate its mission to address the inefficiencies in the patent marketplace. IBM Cloud and IBM Blockchain teams are working closely with IPwe on a multi-year project to assist IPwe in its mission to deliver world class solutions to its enterprise, SME, university, law firms, research institutions and government customers, with a heavy emphasis on meeting the needs of financial, technology and risk management executives. In addition to giving patent owners tools that provide greater visibility, effective management, and ease of conducting transactions with patents, the IPwe Platform reduces costs for innovators, and creates commercial opportunities for those that wish to partner or engage in financial transactions.


Low-Code Platforms and the Rise of the Community Developer: Lots of Solutions, or Lots of Problems?

Most community developers will progress through three stages as they become more capable with the low-code platform. Many won’t progress beyond the first or second stage, but some will go on to the third stage and build full-featured applications used throughout your business. Stage 1—UI Generation: Initially they will create applications with nice user interfaces whose data is keyed directly into the application. For example, they may make a meeting notes application that allows users to jointly add meeting notes as a meeting progresses. This is the UI Generation stage. Stage 2—Integration: As users gain experience, they’ll move to the second stage, where they start pulling in data from external systems and data sources. For example, they’ll enhance their meeting notes application to pull calendar information from Outlook and email attendees after each meeting with a copy of the notes. This is the Integration stage. Stage 3—Transformation: And, finally, they’ll start creating applications that perform increasingly sophisticated transformations. For example, they may run the meeting notes through a machine learning model to tag and store the meeting content so that it can be searched by topic. This is the Transformation stage.

XOps: Real or Hype?

Like DevOps, the various types of Ops aim to accelerate processes and improve the quality of what they're delivering: software (DevOps); data (DataOps); AI models (MLOps); and analytics insights (AIOps). Some consider the different Ops types important since the expertise required for each differs. Others believe it's just hype: a relabeling of what already exists, and/or a risk that the fragmentation created by all the different groups creates extra bureaucracy that frustrates faster value delivery. Agile software development practices have been bubbling up to the business for some time. Since the dawn of the millennium, business leaders have been told their companies need to be more agile just to stay competitive. Meanwhile, many agile software development teams have adopted DevOps, and increasingly they've gone a step further by embracing continuous integration/continuous delivery (CI/CD), which automates additional tasks to enable an end-to-end pipeline with visibility throughout and smoother process flows than traditional waterfall handoffs. Like DevOps, DataOps, MLOps, and AIOps are cross-functional endeavors focused on continuous improvement, efficiency and process improvement.


Sigma Rules to Live Your Best SOC Life

In the Security Operations space, we have been using SIEMs for many years with varying degrees of deployment, customization, and effectiveness. For the most part, they have been a helpful tool for Security Operations. But they can be better. Like any tool, they need to be sharpened and used correctly. After a while, even a sharpened tool can become dull from too much use; with a SIEM, that takes the form of too many events creating the dreaded ALERT FATIGUE! This is a real problem for security operations and must be addressed, because the more alerts there are, the more an engineer must work through, and the more they will miss. Insert Sigma rules for SIEMs (pun intended): a way for Security Operations to implement standardization into the daily tasks of building SIEM queries, managing logs, and threat hunting correlations. What is a Sigma rule, you may ask? A Sigma rule is a generic, open, YAML-based signature format that enables a security operations team to describe relevant log events in a flexible and standardized way. So, what does that mean for security operations? Standardization and collaboration are now more possible than ever before with the adoption of Sigma rules throughout the Security Operations community.
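To make the format concrete, here is a minimal sketch in Python. The rule mirrors a typical Sigma process-creation signature (normally written as a YAML file; it is inlined as a dict here so the sketch needs no YAML parser), and the toy matcher supports just two common Sigma field modifiers. The rule content and event are illustrative:

```python
# A Sigma rule is a YAML signature; shown here as the equivalent dict.
rule = {
    "title": "Suspicious Encoded PowerShell",
    "logsource": {"product": "windows", "category": "process_creation"},
    "detection": {
        "selection": {
            "Image|endswith": "\\powershell.exe",
            "CommandLine|contains": "-enc",
        },
        "condition": "selection",
    },
}

def matches(selection, event):
    """Tiny illustrative matcher supporting two common Sigma modifiers."""
    for key, expected in selection.items():
        field, _, modifier = key.partition("|")
        actual = event.get(field, "")
        if modifier == "endswith" and not actual.endswith(expected):
            return False
        if modifier == "contains" and expected not in actual:
            return False
    return True

event = {
    "Image": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
    "CommandLine": "powershell.exe -enc SQBFAFgA",
}
hit = matches(rule["detection"]["selection"], event)
```

Because the rule itself is backend-agnostic, the same signature can be compiled into queries for different SIEMs, which is what makes sharing detections across teams practical.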


How AI Is Radically Changing Cancer Prediction & Diagnosis

Risk modelling includes assessing risk at different time points, which can determine the preventive measures that need to be taken at different stages. A conventional model, however, predicts the risk at each time point independently of the others, which is not useful. Hence, scientists trained Mirai to have an ‘additive hazard layer’. This layer predicts a patient’s risk at a time point, let’s say four years, as an extension of the risk at a previous time point, say three years, instead of treating the two time points independently. This helps the model learn to make self-consistent risk assessments even with variable amounts of follow-up as input. Secondly, the model includes non-image risk factors like age and hormonal variables but does not necessarily require them at test time, since a trained network can extract this information from mammograms. Hence, the model can be adopted globally. Lastly, standard trained models do not cope even with minor variations, such as a change in the mammography machine used. Mirai therefore used an ‘adversarial’ scheme to de-bias the model, teaching it mammogram representations agnostic to the source clinical environment.
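A minimal sketch of what such an additive hazard layer computes (illustrative only, not Mirai's implementation; the input values are made up): each year's risk extends the previous year's in logit space by a non-negative increment, so the assessments are self-consistent by construction:

```python
import math

def softplus(z):
    return math.log1p(math.exp(z))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def additive_hazard_risks(base_logit, increments):
    """Risk at each follow-up year, built from the previous year's risk
    plus a non-negative increment in logit space, so the assessments are
    self-consistent: predicted risk can never decrease over time."""
    risks, logit = [], base_logit
    for raw in increments:
        logit += softplus(raw)    # softplus forces each increment >= 0
        risks.append(sigmoid(logit))
    return risks

# Hypothetical raw outputs of a network head for a five-year horizon.
risks = additive_hazard_risks(-3.0, [0.2, -0.5, 1.0, 0.0, -1.2])
```

Note that even where a raw output is negative, the softplus maps it to a positive increment, so the five-year risk is always at least the four-year risk; no pair of predictions can contradict each other.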


How To Port Your Web App To Microsoft Teams

While there are many different paths to building and deploying Teams apps, one of the easiest is to integrate your existing web apps with Teams through what are called “tabs.” Tabs are basically embedded web apps created using HTML, TypeScript (or JavaScript), client-side frameworks such as React, or any server-side framework such as .NET. Tabs allow you to surface content in your app by essentially embedding a web page in Teams using <iframe> elements. Teams was specifically designed with this capability in mind, so you can integrate existing web apps to create custom experiences for yourself, your team, and your app users. One useful aspect of integrating your web apps with Teams is that you can pretty much use the developer tools you’re likely already familiar with: Git, Node.js, npm, and Visual Studio Code. To expand your apps with additional capabilities, you can use specialized tools such as the Teams Yeoman generator command-line tool or the Teams Toolkit Visual Studio Code extension, and the Microsoft Teams JavaScript client SDK. These allow you to retrieve additional information and enhance the content you display in your Teams tab.


How AI Can Read Your Brain Waves

The music study is only one of many recent efforts to understand what people are thinking using computers. The research could lead to technology that one day would help people with disabilities manipulate objects using their minds. For example, Elon Musk’s Neuralink project aims to produce a neural implant that allows you to carry a computer wherever you go. Tiny threads are inserted into areas of the brain that control movement. Each thread contains many electrodes and is connected to an implanted computer. "The initial goal of our technology will be to help people with paralysis to regain independence through the control of computers and mobile devices," according to the project’s website. "Our devices are designed to give people the ability to communicate more easily via text or speech synthesis, to follow their curiosity on the web, or to express their creativity through photography, art, or writing apps." Brain-machine interfaces might even one day help make video games more realistic. Gabe Newell, the co-founder and president of video game giant Valve, said recently that his company is trying to connect human brains to computers. The company is working to develop open-source brain-computer interface software, he said. 


Q&A: Dataiku VP discusses AI deployment in financial services

AI is also a real revolution within risk assessment, notably through the enhanced use of alternative data. This is true both for traditional risks and emerging risks such as climate change, helping all financial players — banks and insurers alike — to reconsider how they price risks. Those who have developed a strong expertise in leveraging alternative data and agile modeling have been able to truly benefit from their investment during the ongoing health crisis, which has deeply challenged traditional models. Lastly, the positive impact of AI on customers should not be underestimated. Financial services are confronted with an aggressive competitive landscape as well as demand from customers for improved personalisation, driving improved customer orientation in these organisations. The capacity to build 360° customer views and optimise customer journeys, notably on claims management, are two examples of areas where AI has significantly supported deep transformation within banks and insurance companies, with yet much more to be delivered.



Quote for the day:

"Leadership is a potent combination of strategy and character. But if you must be without one, be without the strategy." -- Norman Schwarzkopf

Daily Tech Digest - February 02, 2021

The Chaos Mindset: Teaching Your Code to Cope

Like Agile, chaos engineering is more than a set of activities and workflows: it’s also a state of mind. Your people and your culture must be ready and able to adopt chaos principles, as well as chaos processes. For the DevOps leader, adopting a new mindset might sound a little, well, vague. But this shift is based on concrete actions, not just philosophical musings. Consider an example from the world of cloud infrastructure: a mission-critical application hosted within a cloud service could be at risk of failure if, say, that cloud service is centralized in a single location, or within a limited number of microservices within the cloud infrastructure. But if the app is hosted in a distributed way, you create greater opportunity for application-level availability and resilience, and you can test for that resilience within the existing production environment. This kind of distributed architecture isn’t brand-new for most enterprises, and, therefore, the process of developing applications in a way that tests for availability in a variety of infrastructure scenarios also shouldn’t be a foreign concept. As a DevOps leader, you can build a culture of resilience-centric thinking by empowering your teams with the tools they need to adopt chaos-style testing, and then showing them how to build that thinking into every sprint and every standup.
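To make "chaos-style testing" concrete, here is a minimal sketch of the idea: kill one replica of a toy distributed service at random and assert that requests still succeed. The replica and service names are invented for illustration and are not from the article or any particular tool.

```python
# A toy chaos experiment: a service backed by several replicas must survive
# the random failure of any single replica.
import random

class Replica:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def handle(self, request):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def serve(replicas, request):
    """Try each replica in turn; succeed if any one of them is up."""
    for r in replicas:
        try:
            return r.handle(request)
        except ConnectionError:
            continue
    raise RuntimeError("total outage")

def chaos_test(num_replicas=3, trials=100):
    """Repeatedly inject a single-replica failure and check availability."""
    for _ in range(trials):
        replicas = [Replica(f"r{i}") for i in range(num_replicas)]
        random.choice(replicas).alive = False  # the chaos injection
        serve(replicas, "ping")                # must not raise: one failure is tolerated
    return True
```

Real chaos tools inject failures into live infrastructure rather than in-process objects, but the discipline is the same: state the availability property, break something deliberately, and verify the property still holds.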


Intel Outside: How The Chip Giant Lost Its Edge

For Intel, the year 2020 was a roller-coaster ride, with more lows than highs. If Apple delivered the much-dreaded news to the company, its rivals NVIDIA and AMD chipped in with more bad news through mega-acquisitions and advancements in technology. Intel’s woes didn’t end there. Last year, rockstar chip architect Jim Keller, who was hired to put Intel on top again, resigned after a brief stint at the company; it was Keller’s shortest tenure, compared with his time at Apple and Tesla. Then there was Chief Engineer Venkata Murthy Renduchintala, who promised in 2019 that Intel’s next-gen 7nm chips were on track to start production in 2021. That didn’t happen. Intel parted ways with Renduchintala as part of a technical-team shake-up. Constant engineering hiccups and internal debate over whether Intel should outsource manufacturing further delayed the arrival of next-gen CPUs. The churn in the company’s top brass also signals Intel’s leadership vulnerabilities: current chief Bob Swan, who will soon be replaced, was himself appointed only a couple of years ago. Swan was tasked with restructuring the company to adjust to disruptive technologies like AI and cloud.


North Korea-Sponsored Hackers Attack with Bad-Code Visual Studio Projects

Microsoft reported a battle with North Korean-sponsored hackers who attacked security researchers with a most innovative technique: compromised Visual Studio projects. The attack was attributed to a group called ZINC, said to be associated with the Democratic People's Republic of Korea (DPRK). A Jan. 28 post titled "ZINC attacks against security researchers" described the organization as a DPRK-affiliated and state-sponsored group. That determination was based on "observed tradecraft, infrastructure, malware patterns, and account affiliations." "This ongoing campaign was reported by Google’s Threat Analysis Group (TAG) earlier this week, capturing the browser-facing impact of this attack," Microsoft said. "By sharing additional details of the attack, we hope to raise awareness in the cybersecurity community about additional techniques used in this campaign and serve as a reminder to security professionals that they are high-value targets for attackers." While such battles between hackers and enterprises and security organizations are obviously common and ongoing, one unusual aspect of this encounter was the choice of payloads for the bad code.


AI Ethics Really Come Down To Security

Innovating trustworthy AI/ML depends on the design, development and distribution of AI systems that learn from and work collaboratively with humans in a comprehensive and meaningful fashion. Security and privacy must be considered at the start of any new technology's architecture; they cannot be properly included as an afterthought. The highest required level of security and data protection must be incorporated in both hardware and software, so that it is configured into every step of the development and supply chain — beginning with design all the way through to the technology's business and utilization model. The Charter of Trust initiative for IoT cybersecurity (of which we're a partner) has also provided excellent guidelines for a risk-based methodology and verification that should be incorporated as core requirements throughout that supply chain. After we identify the core principles that will govern AI development, we must then determine how to ensure these ethical AI systems are not compromised. Machine learning can monitor data and pinpoint anomalies, but unfortunately it can also be used by hackers to increase the impact of their cyberattacks.


Use social design to help your distributed team self-organize

For those on the front lines, a restructuring can feel more like something done to them than with them. Managers might overlook the experience and insights of those expected to innovate, collaborate, and satisfy customers within the new structure. And there is often an explicit or implicit power dynamic that distorts functional considerations as executives jostle for prominence and resources. An alternative to the top-down approach is to let function drive form, supporting those most directly connected to creating value for customers. Think of it as bottom-up or outside-in. One discipline useful in such efforts is social design, a subspecialty of design that aspires to solve complex human issues by supporting, facilitating, and empowering cultures and communities. Its practitioners design systems, not simply beautiful things. I spoke with one of the pioneers in this area, Cheryl Heller, author of The Intergalactic Design Guide: Harnessing the Creative Potential of Social Design. Her current work at Arizona State University centers on integrating design thinking and practice into functions that don't typically utilize design principles. “People’s work is often their only source of stability right now,” she told me. “You have to be careful, because people are brittle.”


How to improve Wi-Fi roaming

The initial tendency may be to install more APs in hopes of finding an easy fix, but doing so without careful analysis can make the situation even worse. Proper roaming requires more than just good signal strength throughout coverage areas; it takes a careful balance between the coverage of each AP on both the 2.4 and 5GHz bands to make roaming work right. ... Getting the coverage overlap just right between all the APs in your network is one of the most important things you can do to improve roaming. At the same time, it is one of the toughest. You have to check signal levels throughout the coverage areas and analyze the overlap. If issues are found, you need to figure out how to address them, perform the fix, and then double-check that it’s actually fixed. Keep in mind you want about a 15% to 20% coverage overlap between AP cells, using -67dBm as the signal boundary for each cell. You want to look at both bands, too, keeping in mind 2.4GHz naturally provides longer range than 5GHz. Too little overlap can result in spots with bad signals. Too much overlap between AP cells in either band can cause co-channel interference and “sticky” clients that don’t roam, which can result in APs that become overloaded with clients.
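The -67dBm cell boundary mentioned above lends itself to a quick scripted sanity check of site-survey data. The sketch below is illustrative only: the survey points, AP names, and category labels are hypothetical, not output from any survey tool. It counts how many APs exceed the boundary at each measured point and flags gaps and excess overlap.

```python
# Classify survey points by how many AP cells cover them at or above
# the -67 dBm boundary used to define a cell's edge.
BOUNDARY_DBM = -67

def classify_point(rssi_by_ap):
    """rssi_by_ap: dict mapping AP name -> measured RSSI (dBm) at one survey point."""
    in_cell = [ap for ap, rssi in rssi_by_ap.items() if rssi >= BOUNDARY_DBM]
    if len(in_cell) == 0:
        return "coverage gap"            # no AP above the boundary: bad-signal spot
    if len(in_cell) == 1:
        return "ok"                      # one strong AP: clean cell interior
    if len(in_cell) == 2:
        return "overlap (roaming zone)"  # the hand-off region you want between cells
    return "excess overlap"              # 3+ strong APs: co-channel interference risk

# Hypothetical survey readings, one dict per measured location.
survey = {
    "lobby":   {"ap1": -55, "ap2": -80},
    "hallway": {"ap1": -66, "ap2": -65},
    "corner":  {"ap1": -75, "ap2": -78},
}
for point, readings in survey.items():
    print(point, "->", classify_point(readings))
```

A real check would be run per band, since a point can sit in a healthy 2.4GHz overlap zone while having a 5GHz coverage gap.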


UK's leading AI startup and scaleup founders highlight the main pain points of running an AI business

Looking specifically at financial institutions, Hodgson says that they must ensure that their data foundations are fit for purpose. “Data is the raw material of our industry, and without it, the benefits and potential of AI are stunted and capped before the system even gets switched on. Many financial institutions already sit atop mountains of their own data in addition to buying more from vendors — yet they do not have the time, the resources or the staff expertise to sift through it,” Hodgson explains. Dr Richard Ahlfeld, founder and CEO at Monolith AI, a startup that builds machine learning software to help engineers improve the product development process, echoes this view. He says: “Any pain points tend to boil down to the data: getting the data, ensuring data security, making sure that you can trust the data. “There’s no standardisation of what makes data ‘valuable’ across the industry either, and not all engineers follow the same protocols and practices. For example, deciding what data to keep can be tricky as it’s hard to anticipate what might or might not be useful to have in the future. Even saving data from failed ventures (a practice which is often overlooked) can have its value, as it acts as a reference for future experiments.”


Ransomware payments are going down as more victims decide not to pay up

While it's positive that a higher percentage of these victims are choosing not to pay cyber criminals, there's still a large number of organisations that do give in – allowing ransomware to continue to be successful, even if those behind attacks have been making slightly less money. However, it might be enough for some ransomware operators to consider if the effort is worth it. "When fewer companies pay, regardless of the reason, it causes a long-term impact, that compounded over time can make a material difference in the volume of attacks," said a blog post by Coveware. The rise in organisations choosing not to give in to extortion tactics around ransomware has also led the gangs to change their tactics, as shown by the increase in ransomware attacks where criminals threaten to leak stolen data if the victim doesn't pay. According to Coveware, these accounted for 70% of ransomware attacks in the final three months of 2020 – up from 50% during the previous three months. However, while almost three-quarters of organisations threatened with data being published between July and September paid ransoms, that dropped to 60% for organisations that fell victim between October and December.


Measuring Crop Health Using Deep Learning – Notes From Tiger Analytics

Agrochemical companies are already experimenting with advanced data science techniques to overcome these challenges: they employ drones to capture high-resolution aerial images of the farms and apply computer vision techniques and other complex algorithms to process the images. However, challenges persist; leaf characteristics such as orientation, alignment, length, shape and twists are difficult to discern when viewed from above, particularly in crops that grow tall and narrow, such as maize. Further complexities are introduced by variability in ambient light conditions, soil terrain, cloud refraction, occlusion and other environmental factors. Finally, all these factors vary over time, which means that to get a clear picture of plant health and treatment performance, regular measurement is required. As deep learning and computer vision fields mature, scientists are beginning to use these technologies for such LAI measurements, and more. Tiger Analytics has collaborated with leading agrochemical companies to develop such solutions. In this article, we outline the possible approaches and challenges. The primary challenge in developing a deep learning solution is the near nonexistence of training data.


Contemporising Data Protection Legislation

Provisioning blanket exemption to government agencies from the application of the data protection law and processing obligations (Section 35, PDP Bill) poses a challenge to reforming and upgrading the data access and surveillance regime. The importance of procedural safeguards, the right to effective recourse, and necessary and proportionate access principles has been reiterated by numerous Supreme Court judgments like PUCL v. Union of India and K.S. Puttaswamy v. Union of India. Such an exemption might inadvertently curtail the government’s stated vision of becoming the data processing and analytics hub of the world, and dent digital economy goals. According to the updated draft of the Standard Contractual Clauses (SCCs) by the European Commission on personal data transfers outside the European region, data exporters must take into account the laws and overall regime that enable public authorities to access personal data through binding requests in the destination country, and gauge if they meet “necessary and proportionate” requirements expected from a “democratic society”. If governments and businesses find the exemption under Section 35 of the PDP Bill excessive, digital trade and investments, and the ability to forge agreements, might be impacted.



Quote for the day:

"Trust is one of the greatest gifts that can be given and we should take great care not to abuse it." --Gordon Tredgold