Daily Tech Digest - September 06, 2020

Crypto-Friendly Banking Platform Cashaa Expanding in India, US, Africa

India’s cryptocurrency market has been growing rapidly ever since the country’s supreme court quashed the RBI circular that banned financial institutions from providing services to crypto businesses. India currently does not have any direct crypto regulations, but there are rumors of the government discussing the bill submitted by the inter-ministerial committee headed by former Finance Secretary Subhash Chandra Garg, which seeks to ban cryptocurrencies like bitcoin. However, the Indian crypto industry firmly believes that this bill is outdated and will not be the one the government introduces. “The Indian government is currently engaging with various stakeholders and trying to work out a solution. India today stands at a juncture, where it can actually embrace the digital currency ecosystem as it is pushing for the digital revolution and is leading the way in the fintech segment,” Gaurav opined. Cashaa will also focus on the U.S. next year, the CEO explained. “We have already started issuing USD accounts regulated by the Banking Division of Colorado to our existing business customers as beta users,” he further shared with news.Bitcoin.com, adding that some crypto clients already using Cashaa’s USD accounts include Nexo, Coindcx, and Unocoin.


Surging CMS attacks keep SQL injections on the radar during the next normal

Sending malicious commands to a web application can result in disclosure of users’ private data, and the attacker can gain access to a user’s computer. This method of injecting code within the same local execution infrastructure is relatively easy when compared to remote injection, which requires more specialized tools and skills. Here, the remote hacker only needs a security flaw that offers a small window to send commands to the remote execution environment, enabling the malicious code to run without any evaluation. As a result, attackers can create a remote entrance to reach the target environment, and oftentimes the administrator has no knowledge that the system has been compromised. Most of the time, attackers exploit remote code execution flaws exposed on the web-facing surface or through narrow-use, specialized ports and protocols. When a CMS is attacked, the remote code execution flaw often originates in a connected platform, such as the .NET environment, the PHP scripting language, or a file-sharing service or database with its own remote code execution vulnerabilities.
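
To make the injection mechanism concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table, column, and attacker input are invented for illustration, not taken from any real CMS.

```python
# Minimal sketch of the injection flaw described above, using Python's
# built-in sqlite3 module; table, column, and input are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

# Attacker-controlled input, e.g. from a CMS search box or URL parameter.
user_input = "alice' OR '1'='1"

# VULNERABLE: concatenating the input lets its quote characters rewrite
# the query, dumping every row in the table.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # both users' secrets are disclosed

# SAFE: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] because no user is literally named "alice' OR '1'='1"
```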


Malware gang uses .NET library to generate Excel docs that bypass security checks

NVISO says the Epic Manchego gang appears to have used EPPlus to generate spreadsheet files in the Office Open XML (OOXML) format. The OOXML spreadsheet files generated by Epic Manchego lacked a section of compiled VBA code, specific to Excel documents compiled in Microsoft's proprietary Office software. Some antivirus products and email scanners specifically look for this portion of VBA code to search for possible signs of malicious Excel docs, which would explain why spreadsheets generated by the Epic Manchego gang had lower detection rates than other malicious Excel files. This blob of compiled VBA code is usually where an attacker's malicious code would be stored. However, this doesn't mean the files were clean. NVISO says that the Epic Manchego gang simply stored their malicious code in a custom VBA code format, which was also password-protected to prevent security systems and researchers from analyzing its content. But despite using a different method to generate their malicious Excel documents, the EPPlus-based spreadsheet files still worked like any other Excel document.
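
The detection gap turns on document structure: OOXML files are ZIP archives, and macro-enabled Excel files normally carry their compiled VBA in the xl/vbaProject.bin stream. The sketch below is a deliberately oversimplified, hypothetical stand-in for that kind of marker check (real scanners parse the compiled VBA inside the stream, not just its presence, and the file path is invented).

```python
# Toy marker check: OOXML documents are ZIP archives, and macro-enabled
# Excel files normally carry compiled VBA in xl/vbaProject.bin. Real
# scanners parse the compiled code inside that stream; checking for the
# stream's mere presence is a deliberate oversimplification. The file
# path is hypothetical.
import zipfile

def has_vba_stream(path: str) -> bool:
    with zipfile.ZipFile(path) as doc:
        return any(name.endswith("vbaProject.bin") for name in doc.namelist())

print(has_vba_stream("suspicious.xlsm"))
```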


American Express Establishes Data Analytics, Risk & Technology Lab (DART) In IIT Madras

The company hopes to apply these technologies across dimensions such as employee engagement and attention, and evaluating and enhancing the quality of education and learning in schools. The Lab at IIT Madras will explore a range of verticals with key emphasis on manufacturing, finance, healthcare, operations management and smart cities. “Our collaboration with IIT Madras reiterates our commitment to support and invest in interventions for public good in the country. The technologies and applied sciences R&D in the Lab will be beneficial for creating an overall societal impact through advancement in financial services, healthcare and safety standards,” said Bharathram Thothadri, EVP and Chief Credit Officer, American Express. It also plans to build talent for industry by partnering with academia while promoting talent and diversity in technology. It has also announced annual scholarships for economically-disadvantaged and meritorious students, including ‘Ambition Awards’ for deserving women students at IIT Madras.


Observability Strategies for Distributed Systems - Lessons Learned

All the panelists said some variation of, "make the easiest path the correct path," with Fong-Jones observing that, "teams are super lazy." Because most teams are focused on developing their service, find ways to generate dashboards and update runbooks automatically. Spoons emphasized the need to create machine-readable central documentation. Similarly, structured logging makes information digestible, which greatly aids the search for patterns. One of the behaviors to encourage is being able to form and test hypotheses. Having all the data from across a distributed system can become overwhelming, so you need ways to narrow your focus. The practice of site-reliability engineering requires a different mindset than "ordinary" software engineering. Although DevOps has been an attempt to apply software engineering to IT operations, SRE takes an opposite approach when thinking about failure. This can be thought of as the duality between monitoring, which is looking for what is anticipated, and observing, where the focus is on what is unexpected. Each of the panelists had a few pitfalls that they've seen, and hope people will avoid.
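
On the structured-logging point, a small Python sketch shows the idea: emitting records as JSON makes every field machine-readable, so pattern-hunting becomes a query rather than a grep. The service and field names are illustrative.

```python
# Structured logging: emit each record as JSON so fields are queryable.
# The service name and extra fields are illustrative.
import json
import logging
import sys
import time

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "service": "checkout",
            "event": record.getMessage(),
            **getattr(record, "fields", {}),  # structured context, if any
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Each record carries machine-readable context instead of free-form prose.
log.info("payment_failed", extra={"fields": {"order_id": 1234, "latency_ms": 87}})
```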


Traditional Banking is an Endangered Species

For banks to survive in a post-COVID-19 world they must review their risk modelling strategies to accommodate the pandemics of the future, rather than falling back to what they know once COVID-19 has been contained. Banks need to ensure that remote working can be provisioned for effectively, in the event of another pandemic, and need to abandon paper processes altogether. All of this is easier said than done and banks must spend time on ensuring they are effectively communicating across the entire workforce. For years, banks have been grappling with siloed data and now they must ensure they do not have siloed communications – where time and money could be lost if the workforce are not kept in the loop across the front end (e.g. products, solutions and services) and the back end (e.g. banking architecture). By harnessing the payments ecosystem, banks can collaborate with technology specialists, to keep up with the pace of demand for international, online payments. ‘Open Banking’ will enable banks to access the right technological expertise to solve the challenges they are facing on a daily basis, and provision fully for the needs of their new, existing and prospective customers.


Cybersecurity Pros Face a Huge Staffing Shortage As Attacks Surge During The Pandemic

Shearer said that to fill the talent gap, more outreach needs to be done to recruit younger workers into the aging workforce, as well as more diverse cybersecurity workers. “Diversity is a big part of it — women are underrepresented, it’s improving. We also here in the United States need to look at other underrepresented minority groups and get them into the fold because it’s going to take everyone we can find to be interested in cyber,” he said. “As people start to retire, it’s only going to exacerbate the fact that it’s an undersized cyber workforce.” Jobs can be lucrative in the field as well—(ISC)2′s data finds the average North American salary for cybersecurity professionals is $90,000 a year, and those who hold security certifications can make more. ... Hiring has become somewhat easier in recent months, Wysopal says, a silver lining in the face of a broader skilled talent shortage in the industry. As the pandemic forced closures and layoffs in all sectors of the economy, more cyber workers have become available and, due to the nature of remote work, candidates outside the area have become more appealing.


SASE vs SD-WAN: A Comparison

SASE’s focus is on providing secure access to distributed resources for the network and its users. The resources can be distributed in private data centers, colocation facilities, and the cloud. As such, security and networking decision-making are baked into the same security tools. SASE products have security tools that reside in a user’s device as a security agent, as well as in the cloud as a cloud-native software stack. For example, the security agent can contain a secure web gateway and a vendor’s cloud can contain a firewall-as-a-service. In a branch office or other location with a collection of people, a SASE appliance is common in order to secure agentless devices like printers. SD-WAN technology was not designed with a focus on security. SD-WAN security is often delivered via secondary features or by third-party vendors. While some SD-WAN solutions do have baked-in security, most do not. SD-WAN’s central goal is to connect geographically separate offices to each other and to a central headquarters, with flexibility and adaptability to different network conditions. In an SD-WAN, security tools are usually located in customer premises equipment (CPE) at offices rather than on devices themselves.


3 Predictions For The Role Of Artificial Intelligence In Art And Design

Until we can fully understand the brain’s creative thought processes, it’s unlikely machines will learn to replicate them. As yet, there’s still much we don’t understand about human creativity. Those inspired ideas that pop into our brain seemingly out of nowhere. The “eureka!” moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines. Typically, then, machines have to be “told” what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th century portraits, and was programmed to compare its own work with those paintings. ... Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options – the ones that best fit the human creative’s “vision”. In this way, machines could help us come up with new creative solutions that we couldn’t possibly have come up with on our own.


Eight case studies on regulating biometric technology show us a path forward

The clearest one was the chapter on India by Nayantara Ranganathan, and the chapter on the Australian facial recognition database by Monique Mann and Jake Goldenfein. Both of these are massive centralized state architectures where the whole point is to remove the technical silos between different state and other kinds of databases, and to make sure that these databases are centrally linked. So you’re creating this monster centralized, centrally linked biometric data architecture. ... The second—and this is a lesson that we keep repeating—consent as a legal tool is very much broken, and it’s definitely broken in the context of biometric data. But that doesn’t mean that it’s useless. Woody Hartzog’s chapter on Illinois’s BIPA [Biometric Information Privacy Act] says: Look, it’s great that we’ve had several successful lawsuits against companies using BIPA, most recently with Clearview AI. But we can’t keep expecting “the consent model” to bring about structural change. Our solution can’t be: The user knows best; the user will tell Facebook that they don’t want their face data collected.



Quote for the day:

"The gem cannot be polished without friction, nor people perfected without trials." -- Confucius

Daily Tech Digest - September 05, 2020

A virtuous cycle: how councils are using AI to manage healthier travel

With local lockdowns being a new threat, councils face fresh calls to gather and understand social distancing requirements. This isn’t just in large towns and cities; local authorities need to be able to assess and understand risk across broader geographical areas to keep people safe. More small towns and villages are already installing cameras and sensors (or upgrading their current infrastructure) to capture data in their streets to identify places where people struggle to social distance. The city of Oxford, too, has implemented a large-scale deployment of cycling-specific sensors. Councils and other local authorities are taking their responsibilities seriously. Aiding all this is AI. Artificial intelligence can underpin a council’s strategy for coping with the Active Travel boom. In practical terms, this means positioning cameras at busy junctions, on popular footpaths, and around town and city centres, then analysing what those cameras see. It’s not just a numbers game, although knowing with confidence how many people are travelling in a certain area on a given day will certainly be useful. AI can quickly identify where road or path layout makes social distancing difficult to maintain, and spot dangerous behaviour such as undertaking or cyclists riding on pavements.


Inclusion And Ethics In Artificial Intelligence

The computer science and Artificial Intelligence (AI) communities are starting to awaken to the profound ways that their algorithms will impact society and are now attempting to develop guidelines on ethics for our increasingly automated world. The systems we require for sustaining our lives increasingly rely upon algorithms to function. More things are becoming increasingly automated in ways that impact all of us. Yet, the people who are developing the automation, machine learning, and the data collection and analysis that currently drive much of this automation do not represent all of us and are not considering all our needs equally. However, not all ethics guidelines are developed equally — or ethically. Often, these efforts fail to recognize the cultural and social differences that underlie our everyday decision making and make general assumptions about what both a “human” and “ethical human behavior” mean. As part of this approach, the US federal government launched AI.gov to make it easier to access all of the governmental AI initiatives currently underway. The site is the best single resource from which to gain a better understanding of the US AI strategy.


Our quantum internet breakthrough could help make hacking a thing of the past

Our current way of protecting online data is to encrypt it using mathematical problems that are easy to solve if you have a digital “key” to unlock the encryption but hard to solve without it. However, hard does not mean impossible and, with enough time and computer power, today’s methods of encryption can be broken. Quantum communication, on the other hand, creates keys using individual particles of light (photons), which – according to the principles of quantum physics – are impossible to make an exact copy of. Any attempt to copy these keys will unavoidably cause errors that can be detected. This means a hacker, no matter how clever or powerful they are or what kind of supercomputer they possess, cannot replicate a quantum key or read the message it encrypts. This concept has already been demonstrated in satellites and over fibre-optic cables, and used to send secure messages between different countries. So why are we not already using it in everyday life? The problem is that it requires expensive, specialised technology that means it’s not currently scalable. Previous quantum communication techniques were like pairs of children’s walkie talkies.
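
A toy simulation conveys why copying attempts are detectable. The sketch below models a BB84-style protocol (an assumption; the article does not name a specific protocol): where sender and receiver happen to pick the same measurement basis the bit survives intact, and an intercept-and-resend eavesdropper introduces roughly 25% errors on those positions.

```python
# Toy BB84-style key exchange: bits survive only where sender and
# receiver pick the same random basis, and an intercept-and-resend
# eavesdropper leaves a detectable error rate on those bits.
import random

n = 10_000
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("xz") for _ in range(n)]
bob_bases = [random.choice("xz") for _ in range(n)]

def measure(bit, send_basis, meas_basis):
    # Same basis: faithful readout; mismatched basis: random outcome.
    return bit if send_basis == meas_basis else random.randint(0, 1)

def transmit(bit, basis, eavesdrop):
    if eavesdrop:  # Eve measures in a random basis and re-sends.
        eve_basis = random.choice("xz")
        bit, basis = measure(bit, basis, eve_basis), eve_basis
    return bit, basis

for eavesdrop in (False, True):
    bob_bits = []
    for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases):
        sent_bit, sent_basis = transmit(bit, ab, eavesdrop)
        bob_bits.append(measure(sent_bit, sent_basis, bb))
    kept = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept) / len(kept)
    print(f"eavesdropper={eavesdrop}: error rate ~{errors:.0%}")
# Prints ~0% without Eve and ~25% with her: copying attempts reveal themselves.
```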


Two Tools Every Data Scientist Should Use For Their Next ML Project

One of the key value propositions of data management is to deliver data to internal and external stakeholders in required quality for different purposes. Data management sets up data value chains that turn raw data into meaningful information. Different data management capabilities should enable data value chains. The core data management capabilities taken into the “Orange” model are data modeling, information systems architecture, data quality, and data governance. These capabilities are performed by data management professionals. Other capabilities belong to other domains, such as IT, security, and other business support functions. To implement a data management capability, a company should establish a formal data management function. The data management function will become operational by implementing four key components that enable data management capability such as processes, roles, tools, and data ... To make the evidence objective, it should be measurable. This is the second criterion. For example, you can prove your progress by demonstrating the number of data quality issues resolved within a specified period. You should also compare the planned and achieved resolved issues.


The fourth generation of AI is here, and it’s called ‘Artificial Intuition’

The fourth generation of AI is ‘artificial intuition,’ which enables computers to identify threats and opportunities without being told what to look for, just as human intuition allows us to make decisions without specifically being instructed on how to do so. It’s similar to a seasoned detective who can enter a crime scene and know right away that something doesn’t seem right, or an experienced investor who can spot a coming trend before anybody else. The concept of artificial intuition is one that, just five years ago, was considered impossible. But now companies like Google, Amazon and IBM are working to develop solutions, and a few companies have already managed to operationalize it. So, how does artificial intuition accurately analyze unknown data without any historical context to point it in the right direction? The answer lies within the data itself. Once presented with a current dataset, the complex algorithms of artificial intuition are able to identify any correlations or anomalies between data points.  Of course, this doesn’t happen automatically. First, instead of building a quantitative model to process the data, artificial intuition applies a qualitative model.


Vulnerability Management: Is In-Sourcing or Outsourcing Right for You?

While size is a factor, both small and large companies can benefit from leveraging the expertise of a partner. Small companies can get enterprise-level services for a fraction of the cost of supporting full time employees; large companies can relieve their IT departments of time-consuming tasks and still save money. This allows for both to focus on their core competencies – the outsource provider brings platform and process expertise to the table to help guide program maturity while handling the grind of scanning, analysis and reporting. This frees up the customer organization to focus on operating their business and handling strategic technology initiatives. A qualified third-party company that specializes in VM already has the certified security professionals on board who are not only up to speed with the latest threats, but always use the most effective detection tools and are in the loop of important new information. If you answered in the affirmative to outsourcing VM, you’ll want to know how to select a company that is truly going to help you shore up the weaknesses in your defenses. First, you want one that has years of experience protecting businesses and offers dedicated support 24/7. 


How to drive business value through balanced development automation

Operationally, challenges stem from misalignment in understanding who the end customer really is. Companies often design products and services for themselves and not for the end customer. Once an organization focuses on the end user and how they are going to use that product and service, the shift in thinking occurs. Now it’s about looking at what activities need to be done to provide value to that end customer. Thinking this way, there will be features, functions, and processes never done before. In the words of Stephen Covey, “Keep the main thing the main thing”. What is the main thing? The customer. What features and functionality do you need for each of them from a value perspective? And you need to add governance to that. Effective governance ensures delivery of a quality product or service that meets your objectives without monetary or punitive pain. The end customer benefits from that product or service having effective and efficient governance. That said, heavy governance is also waste. There has to be a tension and a flow or a balance between Hierarchical Governance and Self Governance where the role of every person in the organization is clearly aligned in their understanding of value contributed to the end customer.


Microservices Governance and API Management

Different microservices teams can have their own lifecycle definitions and different user roles to manage the lifecycle state transfer. That allows teams to work autonomously. At the same time, the WSO2 digital asset governance solution allows these teams to create custom lifecycles and attach them to the services that they implement. As part of that, there can be roles that verify the overall governance across multiple teams by making sure that everyone follows the industry best practices that are accepted by the business. As an example, if the industry best practice is to use the Open API Specification for API definitions, every microservices team needs to adhere to that standard since it is technology-neutral. At the same time, teams should have the autonomy to select the programming language and the libraries used in their development. Another key aspect of design-time governance is reusability. Given that microservices often stem from new ideas, there can be situations where a service required to retrieve data for a new microservices implementation is already available, developed by another team.


Why Observability Is The Next Big Thing In Security

Cloud-native infrastructures and security observability are purposefully designed to remove the security speed bumps that slow innovation down, and instead, leverage a security guardrails approach that supports even faster software integration and delivery. Developers may then focus on serving the customer when they have tailored observability available—driven by automated security feedback cycles—so teams can quickly learn from mistakes and rapidly deliver value and innovation to customers. Optimizing customer experiences on the fly, for example, is just one cloud-native advantage made possible by event-driven architectures (EDAs). DevOps teams are now smartly requiring embedded security context across the development life cycle in order to understand what is going on and to help automate security of their cloud-delivered applications. Any migration into application programming interface (API) and event-driven architectures like cloud-native environments can enjoy the benefits paid forward from preexisting, automated, observable security deployed across your application development life cycle.


Why some artificial intelligence is smart until it's dumb

While practical uses get the most attention, machine learning also offers advantages for basic scientific research. In high-energy particle accelerators, such as the Large Hadron Collider near Geneva, protons smashing together produce complex streams of debris containing other subatomic particles (such as the famous Higgs boson, discovered at the LHC in 2012). With bunches containing billions of protons colliding millions of times per second, physicists must wisely choose which events are worth studying. It’s kind of like deciding which molecules to swallow while drinking from a firehose. Machine learning can help distinguish important events from background noise. Other machine algorithms can help identify particles produced in the collision debris. “Deep learning has already influenced data analysis at the LHC and sparked a new wave of collaboration between the machine learning and particle physics communities,” physicist Dan Guest and colleagues wrote in the 2018 Annual Review of Nuclear and Particle Science. Machine learning methods have been applied to data processing not only in particle physics but also in cosmology, quantum computing and other realms of fundamental physics, quantum physicist Giuseppe Carleo and colleagues point out in another recent review.



Quote for the day:

"You do not lead by hitting people over the head. That's assault, not leadership." - Dwight D. Eisenhower

Daily Tech Digest - September 04, 2020

Blockchain for Master Data Management

What is the relevance of Blockchain for MDM? Blockchain is a type of database – though quite different from traditional relational or emerging NoSQL databases. As highlighted in the podcast, Blockchain is a linked list of cryptographically secured blocks of transactions that are immutable. Participants who do not know or trust each other can rely on and trust the Blockchain. Unlike traditional databases that support CRUD (Create, Read, Update, and Delete), with Blockchain, you can only Create and Read: transactions are validated and added to the blocks in the chain. They can be read but never deleted or updated. All transactions and activities on the Blockchain are timestamped. So, what is the relevance of Blockchain for MDM when we cross organizational boundaries? Conducting business transactions across organizational boundaries has all the challenges of intra-enterprise silos and adds several others. Inter-Enterprise exchanges and data sharing are marred with multiple inefficiencies: manual forms and paperwork, error-prone replications, delays due to organizational or bureaucratic inefficiencies, errors in language translations, especially cross-country exchanges, difficulties, and challenges in reconciling governance policies – to name a few.
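
A minimal sketch makes the Create-and-Read-only property concrete: each block stores a timestamp, its payload, and the hash of the previous block, so any retroactive edit breaks every later link. This is a toy illustration, not a production design.

```python
# Toy hash chain: blocks are timestamped, appended, and never updated;
# editing history breaks the links.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = {k: block[k] for k in ("timestamp", "transactions", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

chain = [make_block("0" * 64, ["genesis"])]                # Create
chain.append(make_block(chain[-1]["hash"], ["A pays B"]))  # Create
print(chain[-1]["transactions"])                           # Read

# No Update or Delete: tampering with an old block is detectable because
# its recomputed hash no longer matches the next block's prev_hash.
chain[0]["transactions"] = ["forged"]
print(block_hash(chain[0]) == chain[1]["prev_hash"])  # False: tampering detected
```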


Everything you need to know about the weird future of quantum networks

QKD technology is in its very early stages. The "usual" way to create QKD at the moment consists of sending qubits in a one-directional way to the receiver, through optic-fibre cables; but those significantly limit the effectiveness of the protocol. Qubits can easily get lost or scattered in a fibre-optic cable, which means that quantum signals are very much error-prone, and struggle to travel long distances. Current experiments, in fact, are limited to a range of hundreds of kilometers. There is another solution, and it is the one that underpins the quantum internet: to leverage another quantum property, called entanglement, to communicate between two devices. When two qubits interact and become entangled, they share particular properties that depend on each other. While the qubits are in an entangled state, any change to one particle in the pair will result in changes to the other, even if they are physically separated. The state of the first qubit, therefore, can be "read" by looking at the behavior of its entangled counterpart. That's right: even Albert Einstein called the whole thing "spooky action at a distance". And in the context of quantum communication, entanglement could, in effect, teleport some information from one qubit to its entangled other half, without the need for a physical channel bridging the two during the transmission.


Cyber security Career Guidance — Part 1 — the Beginner’s Journey

Logs can seem overwhelming the first time you come across them. But all you must do is confront the bully head-on! In my training workshops, I always throw different log file formats on the screen and ask the students to analyze what’s going on. At first, there’s a typical sigh across the whole class, but soon people begin to interpret the different fields and what they could mean. There are numerous tools out there — some that support multiple log formats, others which do a great job at a specific log format. With experience, you will figure out which tool works best for which type of log format, but nothing beats being able to look at raw logs and not be intimidated. ... while it is not mandatory that you know a programming language, it helps a lot. During the interview process, unless it is mentioned on your resume, I would not ask about your programming know-how. But from personal experience, I can vouch for the power of programming when solving real-world technical issues. Again, which language you know is not important. Even C is fine. Shell scripting is possibly even better. Python is awesome. In college, we were taught Basic and C. We taught ourselves C++ and Java on the side.


How Google Maps uses DeepMind’s AI tools to predict your arrival time

Google Maps is one of the company’s most widely-used products, and its ability to predict upcoming traffic jams makes it indispensable for many drivers. Each day, says Google, more than 1 billion kilometers of road are driven with the app’s help. But, as the search giant explains in a blog post today, its features have got more accurate thanks to machine learning tools from DeepMind, the London-based AI lab owned by Google’s parent company Alphabet. In the blog post, Google and DeepMind researchers explain how they take data from various sources and feed it into machine learning models to predict traffic flows. This data includes live traffic information collected anonymously from Android devices, historical traffic data, information like speed limits and construction sites from local governments, and also factors like the quality, size, and direction of any given road. So, in Google’s estimates, paved roads beat unpaved ones, while the algorithm will decide it’s sometimes faster to take a longer stretch of motorway than navigate multiple winding streets.


How to Build a Strong Beta Testers Community

Before you start, you should define your goal and target audience. Defining goals is the first task to complete. Here are a few relevant ones: test an idea and gather feedback to make sure you are solving the right problem; test the sketches to make sure you solve the problem right; and test an early version to get feedback and adjust the solution before the official launch. Don’t forget to describe how you will know that you have achieved your goal. For example, if you want to get feedback regarding your product, that’s great. But what if only one user provides their feedback? Does it mean that you have achieved your goal? Make sure you can measure the results so that you are able to achieve your goal. And as with any other goal, don’t forget to revise your goal during your beta program. You may want to adjust it as you go. How much time do you have to dedicate to the beta program? If you do everything manually, then you need to set a maximum number of participants. Think about how many contacts (customers) you can serve during the beta. Your beta customers will ask questions, provide feedback, and log the bugs.


How to judge open-source projects

An easier way to determine an open-source program's quality is simply to look at the number and quality of its developers. Mike Volpi, a well-known venture capitalist and Index Ventures partner, said that since "software is never sold," it is adopted by the developers, who appreciate the software more because they can see it and use it themselves rather than being subject to it based on executive decisions. Therefore, "open-source software permeates itself through the true experts," and ... "the developers ... vote with their feet." If the programmers are leaving, the maintainers aren't responding to patch requests, and the code is growing moldy, it's time to bid that program good-bye. Or, if it's essential to you, take it over yourself. You can also determine a project's health by how easy -- or not -- it makes it for others to participate in it. Ed Warnicke, a Cisco Distinguished Consulting Engineer, believes successful open-source communities lower the barriers to useful participation. He lists many barriers to participation, which are red flags. ... Another way of judging open-source projects is how many people actually use them.
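
Some of these signals can be measured directly. The sketch below pulls contributor and commit-recency data from the GitHub REST API; the endpoints are real, but the repository is just an example and unauthenticated requests are heavily rate-limited.

```python
# Pulling two project-health signals from the GitHub REST API:
# contributor count and commit recency.
import requests  # pip install requests

repo = "kubernetes/kubernetes"  # example project
contributors = requests.get(
    f"https://api.github.com/repos/{repo}/contributors?per_page=100").json()
latest = requests.get(
    f"https://api.github.com/repos/{repo}/commits?per_page=1").json()

print(f"{len(contributors)}+ listed contributors")
print("last commit:", latest[0]["commit"]["committer"]["date"])
```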


Which cybersecurity failures cost companies the most and which defenses have the highest ROI?

SCRAM (Secure Cyber Risk Aggregation and Measurement) has, according to its creators, solved that longstanding cyber-security problem. “SCRAM mimics the traditional aggregation technique, but works exclusively on encrypted data that it cannot see. The system takes in encrypted data from the participants, runs a blind computation on it, and returns an encrypted result that must be unlocked by each participant separately before anyone can see the answer,” they explained. “The security of the system comes from the requirement that the keys from all the participants are needed in order to unlock any of the data. Participants guarantee their own security by agreeing to unlock only the result using their privately held key.” More technical details about the process and the platform, which consists of a central server, software clients, and a communication network to pass encrypted data between the clients and the server, can be found in this paper. ... The researchers recruited seven large companies that had a high level of security sophistication and a CISO to test out the platform, i.e., to contribute encrypted information about their network defenses and a list of all monetary losses from cyber attacks and their associated defensive failures over a two-year period.
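
SCRAM's actual protocol is more sophisticated, but a toy additive secret-sharing scheme illustrates the core property described above: a blind computation yields the aggregate while no participant's raw number is ever visible, and every party's share is needed to unlock the result. All figures below are invented.

```python
# Toy additive secret sharing: the aggregate is computable while no
# individual input is ever revealed, and every participant's share is
# needed to reconstruct the total. (SCRAM's real cryptography is more
# involved than this sketch.)
import random

MOD = 2**61 - 1  # arithmetic modulo a large prime

def share(value: int, n_parties: int) -> list:
    """Split value into n random-looking shares that sum to it mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

losses = [120_000, 45_000, 980_000]  # each company's private loss figure
n = len(losses)

# Company i hands share j to party j; any single share reveals nothing.
all_shares = [share(v, n) for v in losses]

# Each party sums the shares it holds; the partial sums combine into the total.
partials = [sum(all_shares[i][j] for i in range(n)) % MOD for j in range(n)]
print(sum(partials) % MOD)  # 1145000, with no raw input ever exposed
```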


Open Service Mesh: a Service Mesh Implementation from Microsoft

Microsoft has released (in alpha) Open Service Mesh (OSM), a service mesh implementation compliant with the SMI specification. OSM covers standard features of a service mesh like canary releases, secure communication, and application insights, similar to other service mesh implementations like Istio, Linkerd, Consul, or Kuma. Additionally, the OSM team is in the process of donating the project to the CNCF. OSM implements the service mesh interface (SMI), a set of standard and portable APIs to deploy a service mesh in Kubernetes. When users configure a service mesh through the SMI specification, they don't need to be specific about which service implementation they're running in the cluster. Additionally, OSM comes with standard and basic service mesh features like canary releases, secure service communication, and application insights. In this alpha release, OSM comes with the ability to configure traffic shifting policies, secure communication within services through mTLS, apply fine-grained access control policies, gather application metrics, use external certificate managers, and inject the sidecar Envoy proxy automatically.


The Hidden Costs of Losing Security Talent

Ryan Corey, co-founder and CEO of online training site Cybrary, says companies also lose money on staffing when they don't chart a clear career path for their employees. "Every cyber professional has recruiters calling them all the time. That's just the way it is because there are not enough people to fill the available jobs," he says. "When people feel boxed in, they will leave. They have to know what the path is to the next level." Another issue: Companies don't handle diversity well, adds Ron Gula, a board member at Cybrary. "By diversity I mean diversity in employment backgrounds," he says. "Companies may want to hire a pen tester because they have security experience, but they should also be looking for people who have experience in accounting, a legal department, or other types of jobs." Finally, companies don't fund cyber departments well enough, either, Gula says. "Too often there's a lack of leadership, funding, and a vision for what the department could be," he says. "Sometimes they outsource and have a bad experience and then move forward with a skeleton crew." CyberVista's Petrella says she works with companies on developing their recruiting and retention strategies, as well as how to upskill the people they recruit.


Businesses, policymakers ‘misaligned’ on what ethical AI really means

Policymakers rated “fairness and avoiding bias”, such as the misidentification of individuals, as the top priority for this application of the technology, followed by “privacy and data rights” and “transparency.” Among private firms, however, the number one concern was different. These companies identified “privacy and data rights” as their number one worry. While this is just one example, experts from EY have remarked that the substantial misalignment in points of view between the public and private sectors poses a huge risk to the business landscape, as a focused approach between the two in relation to ethical AI is absent. Policymakers and firms need to unite and collaborate in truly defining ethical AI and must work together to narrow the existing gap. EY global markets digital and business disruption leader, Gil Forer said, “As AI scales up in new applications, policymakers and companies must work together to mitigate new market and legal risks.” Forer continued: “Cross-collaboration will help these groups understand how emerging ethical principles will influence AI regulations and will aid policymakers in enacting decisions that are nuanced and realistic.”



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - September 03, 2020

What is an office for now?

Working from home does work for a lot of people; I’ve been working from home since way before it was cool. But it can be terrible — isolating and uncomfortable, with blurred boundaries that make it too easy to keep working well past “office hours” but equally too easy to drift away from your desk to load the dishwasher. One survey on working from home, conducted by the Institute for Employment Studies in the U.K. early in its lockdown, found that more than half of respondents reported new musculoskeletal complaints, including neck and back pain, while their diet and exercise suffered. Many of them said they slept less and worried more. ... Additionally, asking employees to turn their home into an office makes employers more responsible for what happens there, while simultaneously making it more difficult to assess worker well-being. “I’ve spent a lot of my time making sure that people are OK in a way that you can do very, very swiftly in the office,” Sam Bompas, director at Bompas & Parr, a London-based experience design studio with approximately 20 employees, told me. “In the same way that for children, school provides an important social security function, if there’s anything wrong in [employees’] personal life, the office can do that as well.”


Most IoT Hardware Dangerously Easy to Crack

One of the easiest methods is to gain access to UART, or Universal Asynchronous Receiver/Transmitter, a serial interface used for diagnostic reporting and debugging in all IoT products, among other things. An attacker can use the UART to gain root shell access to an IoT device and then download the firmware to learn its secrets and inspect for weaknesses. "UART is only supposed to be used by the manufacturer. When you get access to it, in most cases you get complete root access," Rogers said. Protecting access to UART, or at least configuring it against interactive access, should be a fairly straightforward task for manufacturers; however, most don't make the effort. "They simply allow you to have complete interactive shell. It is the easiest way to hack every piece of IoT hardware," Rogers noted. Several devices even have UART pin names labeled on the board so it is easy to find the interface. Multiple tools are available to help find them if they are not labeled. Another, only slightly more challenging, route to completely pwning an IoT device is via JTAG, a microcontroller-level interface that is used for multiple purposes including testing integrated circuits and programming flash memory. 
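
For a sense of how little tooling this takes, here is a minimal sketch of reading a device's UART console with pyserial; the port name and 115200 baud rate are assumptions, since the real pinout and speed vary per board.

```python
# Reading a device's UART console with pyserial (pip install pyserial).
# The port name and 115200 baud rate are assumptions; real pinout and
# speed vary per board and are usually found with a logic analyzer.
import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as uart:
    uart.write(b"\n")         # nudge the console; many boards drop to a shell
    banner = uart.read(4096)  # boot log or shell prompt, if any
    print(banner.decode(errors="replace"))
```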


Principles for Microservice Design: Think IDEALS, Rather than SOLID

The goal of interface segregation for microservices is that each type of frontend sees the service contract that best suits its needs. For example: a mobile native app wants to call endpoints that respond with a short JSON representation of the data; the same system has a web application that uses the full JSON representation; there’s also an old desktop application that calls the same service and requires a full representation but in XML. Different clients may also use different protocols. For example, external clients want to use HTTP to call a gRPC service. Instead of trying to impose the same service contract (using canonical models) on all types of service clients, we "segregate the interface" so that each type of client sees the service interface that it needs. How do we do that? A prominent alternative is to use an API gateway. It can do message format transformation, message structure transformation, protocol bridging, message routing, and much more. A popular alternative is the Backend for Frontends (BFF) pattern. In this case, we have an API gateway for each type of client -- we commonly say we have a different BFF for each client.
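
A toy sketch of the BFF idea follows, using Flask (an assumption; the article names no framework). The endpoints and fields are invented: each frontend gets the representation it needs, derived from the same underlying service data.

```python
# Toy BFF sketch with Flask (pip install flask); endpoints and fields
# are invented. Each frontend gets the representation it needs from the
# same underlying service data.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_product(pid):
    # Stand-in for a call to the real downstream product service.
    return {"id": pid, "name": "Widget", "price": 9.99,
            "description": "A long description...", "stock": 42}

@app.route("/mobile/products/<int:pid>")  # mobile BFF: short JSON
def mobile_product(pid):
    p = fetch_product(pid)
    return jsonify({"id": p["id"], "name": p["name"], "price": p["price"]})

@app.route("/web/products/<int:pid>")     # web BFF: full JSON
def web_product(pid):
    return jsonify(fetch_product(pid))

if __name__ == "__main__":
    app.run()
```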


Ethical and professional data science needed to avoid further algorithm controversies

Identifying weaknesses in the attempts to ensure objectivity, the BCS report also said there is a need for clarity around what information systems are intended to achieve at the individual level, and that this should be established “right at the start” of the process. For example, distributing grades based on the characteristics of different cohorts of students so they are statistically in line with previous years – which is what the Ofqual algorithm did – is different to ensuring each individual student is treated as fairly as possible, something which should have been discussed and understood by all stakeholders from the beginning, it said. In terms of accountability, BCS said: “It is essential to develop effective mechanisms for the joint governance of the design and development of information systems right at the start.” Although it refrained from apportioning blame, it added: “The current exam-grading situation should not be attributed to any single government department or office.” CEO of the RSS, Stian Westlake, however, told Sky News the results fiasco was “a predictable surprise” because of DfE’s demand that Ofqual reduce grade inflation.


Why you shouldn’t mistake AI for automation

AI and automation cannot be mistaken for the same thing—where there’s automation, there is no requirement that artificial intelligence is involved. Indeed, automation has been around for centuries, far longer than we’ve had computers: traditional milling used water wheels to automate manual processes that human labor would otherwise have been required for. Water spins the wheel, which turns the millstone—an automated process that’s decidedly unintelligent. Simple automation has been the cornerstone of many businesses for years. For example, a process of sending out invoices may be automated once inputs into spreadsheets have been confirmed by people in the accounts department. Automation means that machines are replicating human tasks. But AI demands that the machines are also replicating human thinking. This means programming that can reflect on its own procedures and make decisions outside the scope of its own programming. Ultimately, machine learning requires a machine to react dynamically to changing variables. This is a fundamentally different objective to automation, which is essentially about teaching machines to perform repetitive tasks with predictable inputs. For this reason, applying machine learning to any automated process may be a case of overengineering.


Convert PDFs to Audiobooks with Machine Learning

When you look at a research paper, it’s probably easy for you to gloss over the irrelevant bits just by noting the layout: titles are large and bolded; captions are small; body text is medium-sized and centered on the page. Using spatial information about the layout of the text on the page, we can train a machine learning model to do that, too. We show the model a bunch of examples of body text, header text, and so on, and hopefully it learns to recognize them. This is the approach that Kaz, the original author of this project, took when trying to turn textbooks into audiobooks. Earlier in this post, I mentioned that the Google Cloud Vision API returns not just text on the page, but also its layout. ... The book Kaz was converting was, obviously, in Japanese. For each chunk of text, he created a set of features to describe it: how many characters were in the chunk of text? How large was it, and where was it located on the page? What was the aspect ratio of the box enclosing the text (a narrow box, for example, might just be a side bar)? Notice there’s also a column named “label” in that spreadsheet above. That’s because, in order to train a machine learning model, we need a labeled training dataset from which the model can “learn.” 
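
A small sketch of that training step, using scikit-learn: each text chunk becomes a feature vector of layout properties and the model learns to label it. The feature values and labels below are made-up stand-ins for a real labeled dataset.

```python
# Classifying text chunks by layout features with scikit-learn
# (pip install scikit-learn). Feature vectors are
# [n_chars, box_height, y_position, aspect_ratio]; the values and
# labels are made-up stand-ins for a real labeled dataset.
from sklearn.ensemble import RandomForestClassifier

X = [
    [38, 28, 0.05, 12.0],   # short, tall text near the top: a title
    [900, 11, 0.50, 0.8],   # dense, medium text mid-page: body
    [60, 8, 0.92, 15.0],    # small, wide strip at the bottom: caption
    [45, 26, 0.07, 11.0],
    [850, 12, 0.45, 0.7],
    [55, 7, 0.90, 14.0],
]
y = ["header", "body", "caption"] * 2

model = RandomForestClassifier(random_state=0).fit(X, y)
# A new chunk from the Vision API: long, medium height, centered.
print(model.predict([[780, 11, 0.55, 0.9]]))  # expect ['body']
```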


Zero-trust framework ripe for modern security challenges

Adopting a zero-trust security model is not an overnight process. "Younger companies with advanced architectures and less legacy equipment have an advantage since they are already utilizing new technology and are up to speed on new technology," said Pete Lindstrom, vice president of security research with IDC's IT Executive Program. Legacy infrastructure is an obstacle companies face when trying to shift to a zero-trust approach. A common yet misguided course of action is to conduct a massive overhaul of security infrastructure. "Companies often make the mistake of trying to boil the ocean and go way too broad in scope," Cunningham said. "They should focus in on granular things they can achieve one at a time, like enabling multifactor authentication, remote access control and disabling file shares." Since zero-trust security is a hot buzzword, businesses should be wary in terms of how they evaluate potential vendors since many like to pitch their products as zero trust when they really aren't. "Rule No. 1: Companies should make sure the vendor is using zero trust [in its own network] so they are buying something from someone who understand their pains," Cunningham said.


.NET CLI Templates in Visual Studio

One of the values of using tools for development is the productivity they provide in helping start projects, bootstrapping dependencies, etc. One way that we’ve seen developers and companies deliver these bootstrapping efforts is via templates. Templates serve as a useful tool to start projects and add items to existing projects for .NET developers. Visual Studio has had templates for a long time and .NET Core’s command-line interface (CLI) has also had the ability to install templates and use them via `dotnet new` commands. However, if you were an author of a template and wanted to have it available in the CLI as well as Visual Studio, you had to do extra work to enable the set of manifest files and installers to make them visible in both places. We’ve seen template authors gravitate toward ensuring one works better, which sometimes leaves the other without visibility. We wanted to change that. Starting in Visual Studio 16.8 Preview 2 we’ve enabled a preview feature that you can turn on that enables all templates that are installed via the CLI to show as options in Visual Studio as well.


How to predict new consumer behaviour in the Covid-19 era

Keeping tabs on what consumers are buying is the easiest way to get your data – predicting which products will grow and which won’t is where the gold is. While some product changes will be obvious — it’s unsurprising that purchase of medical supplies and non-perishable foodstuffs has increased — a 652% rise in the purchase of bread machines suggests that we don’t quite have the skills of Paul Hollywood just yet. There is also insight to be had in observing the products which have decreased in popularity over lockdown. Camera sales fell by 64% over the previous four months. As social events such as holidays, birthdays and weddings were cancelled, so was the need to bag a new ‘social accessory’ for the occasion. Think about how your product suite fits around these trends and whether these trends are short term reactions, or long term shifts in behaviour. Can you scale back on a certain line of products or diversify your range to meet a new product demand? A shift to working — and playing — from home has driven significant demand for new purchases. With 43% of adults now working from home, companies that can help transform our homes into multipurpose activity hubs are rising in popularity.


How to make complicated machine learning developer problems easier to solve

Many of the difficulties in building efficient AI companies happen when facing long-tailed distributions of data…. It's becoming clear that long-tailed distributions are also extremely common in machine learning, reflecting the state of the real world and typical data collection practices…. Current ML techniques are not well equipped to handle [long-tail distributions of data]. Supervised learning models tend to perform well on common inputs (i.e. the head of the distribution) but struggle where examples are sparse (the tail). Since the tail often makes up the majority of all inputs, ML developers end up in a loop--seemingly infinite, at times--collecting new data and retraining to account for edge cases. And ignoring the tail can be equally painful, resulting in missed customer opportunities, poor economics, and/or frustrated users. Unfortunately, the answer isn't to throw more computational horsepower or data at the problem. The very problem of disparate data across diverse customer inputs contributes to diseconomies of scale, whereby it may cost 10X more (in terms of data, infrastructure, and more) to generate a 2X improvement.
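
A quick numeric illustration of the long-tail point, using NumPy (the distribution and its parameter are invented for the example): a handful of head classes dominate per-class counts, yet the many rare tail classes together make up most inputs.

```python
# Long-tail illustration with NumPy: a few head classes dominate
# per-class counts, but the many rare tail classes together make up
# most inputs. The Zipf parameter is invented for the example.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.zipf(1.1, size=100_000)
values, counts = np.unique(samples, return_counts=True)
order = np.argsort(counts)[::-1]

head = counts[order[:10]].sum()   # the ten most common inputs
tail = counts[order[10:]].sum()   # everything else
print(f"10 head classes cover {head / len(samples):.0%} of inputs")
print(f"{len(values) - 10} tail classes cover {tail / len(samples):.0%} of inputs")
```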



Quote for the day:

“Our greatest glory is not in never failing, but in rising up every time we fail.” -- Ralph Waldo Emerson 

Daily Tech Digest - September 02, 2020

Building a viable IT budget for 2021 in a time of uncertainty: Seven critical steps

In 2021, IT budget spends will be diversified over a broader range of categories (digitalization, mobile computing, employee training, for example) than in 2020, when IT budgets were heavily invested in security and cloud services. Security and cloud services will still lead investment categories, but organizations have reached an inflection point and feel they have attained many of their initial goals in these areas. End users will continue to be engaged in technology decision making. However, there are indications that more organizations want to fully understand just how much they spend on IT across the company. From a budgetary standpoint, this has sparked a movement to consolidate more of the IT spend (and assets) under a single umbrella, with IT in charge. Also in 2021, CFOs and other technology budget decision-makers will expect more input from successful trials and proofs of concept before they agree to fund new technology. This is in response to the mixed performance of ROI formulas, and also to cost overruns, which have routinely occurred with cloud services. That's not all. Below are seven additional budget forecasts that IT budget planners should take into account before building a 2021 IT budget.


Improvements in native code interop in .NET 5.0

With .NET 5 scheduled to be released later this year, we thought it would be a good time to discuss some of the interop updates that went into the release and point out some items we are considering for the future. As we start thinking about what comes next, we are looking for developers and consumers of any interop solutions to discuss their experiences. We are looking for feedback about interop scenarios in general – not just those related to .NET. If you have worked in the interop space, we’d love to hear from you on our GitHub issue. Some items mentioned in this post are Windows-specific (COM and WinRT). In those cases, ‘the runtime’ refers only to CoreCLR. ... C# function pointers will be coming to C# 9.0, enabling the declaration of function pointers to both managed and unmanaged functions. The runtime had some work to support and complement the interop-related parts of the feature. ... C# function pointers provide a performant way to call native functions from C#. It makes sense for the runtime to provide a symmetrical solution for calling managed functions from native code. UnmanagedCallersOnlyAttribute indicates that a function will be called only from native code, allowing the runtime to reduce the cost of calling the managed function.


Ducati Motors to leverage IT transformation from Aruba and Lenovo

“Using the latest and most advanced technologies is part of Ducati’s DNA,” said Konstantin Kostenarov, chief technology officer at Ducati. “Relying on the best technologies made available through our partners has significantly contributed to the overall improvement of processes, while at the same time increasing the value of the results achieved. “The choices made two years ago and the projects that have been carried out since then have allowed us to tackle the various complexities of this sport in the most effective way possible.” Giorgio Girelli, general manager of Aruba Enterprise, commented: “Among the technologies that have emerged as a result of Covid-19, the cloud is undoubtedly one that has proven its worth and made it possible to better face crisis situations. “An internal commissioned survey reveals that 59% of those who were able to use cloud solutions during emergency situations considered its use to be fundamental to their operations. “The sharing and combination of the latest technologies between the three companies involved has given life to a very innovative project focused on one goal: obtaining maximum performance.”


Leveraging AI to Deliver a Personalized Experience in the New Normal

It is key to understand how different subscribers perceive different experiences while gaming, attending a smart venue or traveling virtually. Each of these experiences will vary for different individuals: e.g. a man in his 30s who works from home versus a teenager who moves around the city. These experiences need to be predicted across various touch points, such as OTT game apps or smart venues, the network, call center, retail, and billing. It is also crucial to proactively identify anomalies and factors contributing to a negative experience or positive experience in order to act fast to resolve issues before they impact gaming customers, or to target the right customers at the optimal time for an add-on purchase in a smart venue. The application of AI and ML brings intelligent insights that are more precise than those produced by existing processes and systems, and enables the CSP to predict changes or anomalies in their customers’ experiences. AI and ML make it possible to look at each subscriber based on their individual profile, including demographics, device used or mobility, to predict the experience more accurately, while taking into account the individual sensitivities, biases and expectations. The insights software learns as dynamics change, whether in the CSP's network, a customer segment or the market, and adapts its predictions accordingly.


To build responsibly, tech needs to do more than just hire chief ethics officers

Just like the early days of digital, ethics can seem complex and remote. Remember thinking, “The internet will never be big enough to disrupt my industry”? It can be tempting to assume you need a Ph.D. to debate complex topics like algorithmic bias or exclusion, especially as many of those chief ethics officers have those deep credentials and expertise. Even though tech fancies itself as an industry that welcomes new types of talent and thinking, credentialism is more part of the industry culture than we think – or admit. (If you’re questioning that, just think about how popular it is to put ex-employers in your Twitter biography.) Unless you work on ethics full time or you’re a product VP, it’s easy to feel that you have no say or no role in your company’s commitment to social responsibility, especially if you’re underrepresented at your company or speaking up puts you at risk. Ethical leaders play a powerful central role in coordinating, setting standards and creating incentives, but they wouldn’t want to be the only ones to own this work, either. Responsibility’s a muscle we build and practice. Doing the right thing isn’t a one-off action, but a commitment to values that inform day-to-day behaviors and decisions. So we need to create structures that ensure company values are embedded in roles across the board.


What Is Resilience Engineering?

Resilience engineering today isn’t thought of as a function. However, just as DevOps was a description of culture before it was a role, and site reliability was an extension of operations before it was a focus, I wouldn’t be surprised if resilience engineering became a function in the near future. The first question most will ask, however, is: “Isn’t this just SRE?” The purpose of the term is to shift the focus from simply reacting to incidents to developing long-term response strategies for them. Because the expectation in these environments is that things will break, resilience is the responsibility of existing DevOps and cloud operations teams. When applications and services do break, a “fly by the seat of your pants” response strategy will not work. Resilience engineering, while rooted in engineering practices, is largely focused on building strategies and a framework for their execution. This leaves the process of building resilience largely unestablished, in part because each system is unique. And how you respond to issues in that system will likely be unique, even if the management plane that reports those issues is not. ... For most, the best part of resilience engineering is taking what is learned from previous incidents and finding ways to automate future resolution, as sketched below.
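
As a rough sketch of that last idea (mine, not the article's): once an incident's failure signature is understood, its fix can be registered as an automated remediation, so the next occurrence is resolved without paging anyone. All signatures and actions below are hypothetical:

    # Minimal sketch: encode lessons from past incidents as automated remediations.
    from typing import Callable

    REMEDIATIONS: dict[str, Callable[[], str]] = {}

    def remediation(signature: str):
        """Register a fix learned from a previous incident."""
        def register(fn: Callable[[], str]):
            REMEDIATIONS[signature] = fn
            return fn
        return register

    @remediation("connection_pool_exhausted")
    def recycle_pool() -> str:
        return "recycled stale connections"   # lesson from a past outage

    @remediation("disk_pressure")
    def rotate_logs() -> str:
        return "rotated and compressed logs"

    def handle_alert(signature: str) -> str:
        action = REMEDIATIONS.get(signature)
        if action is None:
            return f"no playbook for '{signature}': page a human"
        return action()

    print(handle_alert("disk_pressure"))   # automated
    print(handle_alert("novel_failure"))   # still needs a person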


Sustainability Through a Better 5G

Ericsson talks about ‘breaking the energy curve’ by providing products and solutions that simply use less energy and are the practical choice for companies striving to make a sustainable shift in their digital transformation journey. Swapping old radio equipment for 5G-ready Ericsson Radio System equipment nationwide enables service providers to serve 5G use cases with a single software upgrade, and can also save them up to 30 percent on their energy consumption. For some operators these savings equate to paying back the investment made in modernization within just three years; who says sustainability does not go together with business goals? Looking to the future of work and travel post-coronavirus, it’s clear that our global mindset has shifted and that we can’t just go back to the way things were before. It’s all about connectivity, especially during these challenging times when keeping in touch with loved ones, essential services and businesses is more important than ever. The next era will see technology not only serving our need to stay connected but also enabling a more inclusive and sustainable world. With a focus on real-time data built upon a framework of sustainability, Ericsson has architected a 5G-aware traffic management solution with AI embedded in its RAN Compute software.


Working from home: The 12 new rules for getting it right

Remote working doesn't change some elements of corporate professionalism. "Don't expect that colleagues, clients, and managers should always be easygoing in terms of dress code, tone of voice and punctuality in the remote workplace," Herman Tse, professor in the department of management at Monash Business School, tells ZDNet. And although there is now a screen separating you from your colleagues, don't take this as an opportunity to discreetly check emails or scroll Twitter during a video call, because others can tell when you are multi-tasking, even virtually. You wouldn't check your phone in front of a co-worker giving an in-person presentation, so there is no reason to act differently online. With 30-minute slots being the default option when setting up a calendar meeting, calls that could take a couple of minutes now last far longer than necessary. "There is work that needs to be done around calendar norms," Sowmyanarayan adds. "Things that take two minutes should take two minutes." Before setting up a day full of half-hour meetings, therefore, remember how long those chats would have taken in an office. More often than not, you will find that a shorter call is far more appropriate.


App Trimming in .NET 5

Trimming sounds great but, as with most good things, there is a catch. Trimming performs a static analysis of the code and can therefore only identify types and members that are referenced from code. However, .NET offers a great deal of dynamism, typically depending on reflection. For example, Dependency Injection in ASP.NET Core uses reflection to select appropriate constructors. This is largely invisible to the static analysis, so the trimmer either needs to be told about the required types or must be able to detect common dynamism patterns; otherwise it will trim away code that the application needs, resulting in runtime crashes. ... .NET 5 can take it two levels further and remove types and members that are not used. This can have a big effect where only a small subset of an assembly is used, for example, the console application above. Member-level trimming carries more risk than assembly-level trimming, and so is being released as an experimental feature that is not yet ready for mainstream adoption. With assembly-level trimming, it is more obvious when a required assembly is missing; with member-level trimming, you need exhaustive testing of the app to ensure that nothing has been trimmed that could be required.
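
For reference, the .NET 5 project settings that control this behavior are a small csproj fragment: PublishTrimmed turns trimming on, and TrimMode selects between the default assembly-level mode ("copyused") and the experimental member-level mode ("link") discussed above:

    <!-- csproj fragment: opt a .NET 5 app into trimming.
         "copyused" keeps whole assemblies that are referenced (assembly-level);
         "link" removes unused types and members too (experimental). -->
    <PropertyGroup>
      <PublishTrimmed>true</PublishTrimmed>
      <TrimMode>link</TrimMode>
    </PropertyGroup>

Trimming applies on a self-contained publish, e.g. dotnet publish -c Release -r linux-x64. For code reached only via reflection, .NET 5 also provides annotations such as the DynamicDependency attribute to tell the trimmer to keep specific members.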


Q&A: CTO tips on delivering cloud innovation to avoid disruption

Make sure to develop and leverage an internal requirements matrix of what you are looking for. Be very clear about what you want and need from a particular cloud solution. Stack-rank key priorities and progressively implement towards the long-term vision, as sketched below. Ask any vendor: How are things audited? Do they comply with privacy regulations such as GDPR? What technical support do they offer? Get a full picture of the vendor's commitment. Deployments that are measured in quarters are too slow; companies need to think about how they can take advantage of the speed and control of cloud deployments and use an agile approach to transform incrementally. An important element to consider is the vendor's user adoption rate and the holistic usability of any cloud applications. One of the most important things is usability and adaptability: will this be easily adaptable to fit your company's needs? Look at the vendor's roadmap and past innovations to get a sense of its ability to keep innovating and to adapt to the changing needs of your business. Start a dialogue with vendors about how you need to demonstrate results quickly.
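
One way to make the requirements matrix and stack-ranking concrete is a simple weighted scoring sheet; the criteria, weights and ratings below are hypothetical, purely to illustrate the mechanics:

    # Minimal sketch: stack-ranked requirements matrix for vendor evaluation.
    weights = {"gdpr_compliance": 0.30, "audit_logging": 0.25,
               "tech_support": 0.20, "usability": 0.15, "roadmap_fit": 0.10}

    vendors = {
        "VendorA": {"gdpr_compliance": 5, "audit_logging": 4, "tech_support": 3,
                    "usability": 4, "roadmap_fit": 2},
        "VendorB": {"gdpr_compliance": 3, "audit_logging": 5, "tech_support": 4,
                    "usability": 3, "roadmap_fit": 5},
    }

    def score(ratings):
        """Weighted sum of 1-5 ratings against the requirements matrix."""
        return sum(weights[c] * ratings[c] for c in weights)

    for name, ratings in sorted(vendors.items(), key=lambda kv: -score(kv[1])):
        print(f"{name}: {score(ratings):.2f}")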



Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer

Daily Tech Digest - September 01, 2020

UK government unveils next steps in digital identity plans

The Digital Identity Strategy Board’s six principles:

Privacy – When personal data is accessed, people will have confidence that there are measures in place to ensure their confidentiality and privacy; for instance, a supermarket checking a shopper’s age, a lawyer overseeing the sale of a house, or someone applying to take out a loan.

Transparency – When an individual’s identity data is accessed through digital identity products, they must be able to understand by whom, why and when; for example, being able to see how your bank uses your data through digital identity solutions.

Inclusivity – People who want or need a digital identity should be able to obtain one.

Interoperability – Setting technical and operating standards for use across the UK’s economy, to enable both international and domestic interoperability.

Proportionality – User needs and other considerations, such as privacy and security, will be balanced so that digital identity can be used with confidence across the economy.

Good governance – Digital identity standards will be linked to government policy and law. Any future regulation will be clear, coherent and aligned with the government’s wider strategic approach to digital regulation.


Iranian Hackers Using LinkedIn, WhatsApp to Target Victims

By personalizing the campaign and using these social media platforms, the attackers attempt to gain the victims' trust and coax them into opening the malicious links embedded in follow-up emails, according to the report. Charming Kitten, also known as APT35, Phosphorous and Ajax, is one of Iran's top state-sponsored hacking groups. While the group's tactic of impersonating journalists is not new, ClearSky researchers say the latest campaigns are the first time the threat actors have used media other than email or SMS to target their victims. "This is the first time we identified an attack by Charming Kitten conducted through WhatsApp and LinkedIn, including attempts to conduct phone calls between the victim and the Iranian hackers," the researchers note in the report. "These two platforms enable the attacker to reach the victim easily, spending minimum time in creating the fictitious social media profile. However, in this campaign, Charming Kitten has used a reliable, well-developed LinkedIn account to support their email spear-phishing attacks." ... Charming Kitten has been targeting journalists and activists since at least 2013.


Dealing with sovereign data in the cloud

Data sovereignty is more of a legal issue than a technical one. The idea is that data is subject to the laws of the nation where it is collected and stored. Laws vary from country to country, but the most common rule you'll see is that some types of data may never leave the country. Other regulations govern encryption and how the data is handled and by whom. These were fairly easy rules to follow when we had dedicated data centers in each country, but the use of public clouds with regions and points of presence all over the world complicates things. Misconfigurations, lack of understanding, and general screw-ups lead to fines, reputational damage and, in some cases, prohibitions on using cloud computing altogether. Some best practices are emerging to deal with data sovereignty in the cloud. Data governance systems are worth their weight in gold: when dealing with regulations that are bound to data, these systems keep you out of trouble because they won't allow humans to violate data policies that are set to reflect the law of the land where the data resides. Training is another critical point. Most data sovereignty issues can be traced to human error, so everyone handling the data should be knowledgeable about the regulations; many countries mandate this.
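
A minimal sketch of the governance guard described above (dataset names and regions are hypothetical): the policy layer refuses any write that would move a dataset out of its mandated region, so human error cannot violate the law of the land:

    # Minimal sketch: block writes that would move data out of its sovereign region.
    RESIDENCY_RULES = {
        "customer_pii_de": "eu-central-1",   # hypothetical: German PII stays in-region
        "billing_uk": "eu-west-2",
    }

    class SovereigntyViolation(Exception):
        pass

    def write_dataset(dataset: str, target_region: str) -> None:
        required = RESIDENCY_RULES.get(dataset)
        if required and target_region != required:
            raise SovereigntyViolation(
                f"{dataset} is pinned to {required}; refusing write to {target_region}")
        print(f"wrote {dataset} to {target_region}")

    write_dataset("customer_pii_de", "eu-central-1")       # allowed
    try:
        write_dataset("customer_pii_de", "us-east-1")      # blocked by policy
    except SovereigntyViolation as err:
        print(f"blocked: {err}")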


How IoT is helping cities become more sustainable than ever before

Sensor-enabled devices have been helping to monitor the environmental impact of cities for some time, collecting details about sewers, air quality, and garbage. Recently, air pollution has been a big pain point in cities such as London, Paris and Rome, where it is regularly cited as one of the most serious environmental problems affecting health today. To address this, many are turning to Air Quality Eggs (AQEs), open-source IoT platforms for air pollution monitoring. In simple terms, this is an open system that collates citizen-contributed data on air quality. ... Connected technologies are also helping to increase awareness of, and visibility into, individual energy and resource usage. Smart energy meters provide city dwellers with transparent data on their own energy consumption, which has been shown to reduce consumption across the board. Today, connected smart thermostats can also integrate with heating systems so that clear-cut decisions can be made on when to turn the heating on, based on fluctuating energy costs. Moreover, smart IoT water management sensors can be combined with data analytics programmes to give consumers greater visibility into the amount of water they use.
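
The thermostat decision the article describes boils down to weighing comfort against a live energy price; here is a minimal sketch (the thresholds and price figures are hypothetical):

    # Minimal sketch: decide whether to heat, given room temperature and energy price.
    def should_heat(indoor_temp_c: float, price_per_kwh: float,
                    comfort_min_c: float = 19.0, price_cap: float = 0.30) -> bool:
        """Heat when the room is cold, but defer briefly if energy is
        expensive and the temperature is still tolerable."""
        if indoor_temp_c >= comfort_min_c:
            return False
        if price_per_kwh > price_cap and indoor_temp_c > comfort_min_c - 2:
            return False   # close enough to comfort: wait for cheaper energy
        return True

    print(should_heat(18.5, 0.45))   # False: tolerable room, expensive energy
    print(should_heat(16.0, 0.45))   # True: too cold regardless of price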


Overcoming the challenges of machine learning at scale

As with any emerging technology, another challenge is ensuring a positive return on investment with respect to business objectives. Success requires adjustments to both process and culture. “Organizations that are serious about scaling machine learning and bringing more models from the lab to production are investing in the processes, tools, and skills to support model management and operations,” said Isaac Sacolick, President of StarCIO and author of Driving Digital. “Organizations should start with high-value and easy-to-execute experiments, but then must recognize that scaling requires an investment in an end-to-end machine learning lifecycle.” Tim Crawford, CIO Strategic Advisor with AVOA, also emphasized the importance of process and culture. “First step, create a methodology and culture that supports ML and prioritizes how to engage ML,” he said. “Identifying the right projects, prioritizing, ensuring that you have enough good data and creating a culture that embraces ML across the enterprise.” A lack of alignment between ML projects and the business can hobble efforts to scale the technology, said Will Kelly, a technical writer.


Remote Work Has Law Firm Cybersecurity in a Fragile State

For even the most vigilant staff, homes are never going to be quite like offices. It's too easy for someone to overhear sensitive information, and too much to expect that no one will ever use a personal email, chat tool or social media account to offer something that resembles legal advice. There are so many variables that can no longer be controlled. One firm has gone so far as to insist its lawyers switch off any smart device when on calls to certain clients, lest an app listen in. Other firms have decided that certain apps should be banned altogether. Ropes & Gray banned its lawyers and staff from having the social media app TikTok on devices that also receive work emails, following privacy concerns from clients. And these are just the threats that have been discovered. Research by cybersecurity firm Tessian found that data loss incidents happen far more often than IT directors think. No wonder IT leaders are constantly telling workers to take this stuff more seriously. Unfortunately, it is probably fair to say that only one thing will really make people pay proper attention to their home working habits: a major data breach hitting the headlines.


Is Covid-19 a Mental Health Tipping Point?

As more people remain at home in fear of COVID-19, it's clear that the future of care is becoming increasingly digital. Even private insurers are stepping up, with most expanding their telehealth coverage, sometimes with no co-pay. This has been a windfall for digital behavioral health startups: venture funding for this technology has reached unprecedented levels, with a record $588M raised during the first half of 2020, spurred by the pandemic. It's clear that things will never be the same, and in some ways that's a good thing. This shift has forced many companies to have difficult discussions about staff mental health and wellbeing that had previously been avoided. This new openness is helping employees feel more comfortable acknowledging how they're feeling, making it okay not to feel "okay." This makes the role of managers more complicated, and more impactful, than ever before. Yet some managers may be reluctant to share their own feelings, or unable to manage what can easily become an emotionally charged discussion. And, at the same time, they may be suffering too. It is essential that companies ensure managers have the training and support they need to, in turn, support their teams.


Underbanked households would benefit from a regulated blockchain

To be clear, distributed ledger technology is not a panacea, but its core attributes reinforce and strengthen essential controls required by regulators. First, the immutability of the ledger prevents participants within a network from changing or tampering with transactions once they have been recorded. Second, since the technology is decentralized, it provides greater transparency and reduces the risk of important information being concentrated within one group or organization. Third, the encrypted nature of blockchain strengthens data privacy and security while enabling secure data-sharing between counterparties, including with regulators and law enforcement when necessary. Many financial institutions remain reluctant to incorporate blockchain tools into their payments or compliance operations. Skepticism from industry, regulators and policymakers has further dampened interest. Yet essential financial products and services are increasingly being facilitated outside the traditional banking system, often at a faster pace. Many of these new tools are accessible across borders, beyond any particular regulatory jurisdiction.
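
The immutability claim is easiest to see in miniature: in a hash-chained ledger, each block commits to its predecessor's hash, so altering any recorded transaction invalidates every later link. A minimal sketch, not any production blockchain:

    # Minimal sketch: a hash chain is tamper-evident.
    import hashlib, json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    chain, prev = [], "0" * 64
    for tx in ["alice->bob:10", "bob->carol:4"]:
        block = {"tx": tx, "prev": prev}
        prev = block_hash(block)
        chain.append(block)

    def verify(chain) -> bool:
        prev = "0" * 64
        for block in chain:
            if block["prev"] != prev:
                return False
            prev = block_hash(block)
        return True

    print(verify(chain))                  # True
    chain[0]["tx"] = "alice->bob:1000"    # tamper with recorded history
    print(verify(chain))                  # False: the chain no longer links up

Real networks add decentralized consensus on top of this, which is what stops a single participant from simply recomputing the whole chain after tampering.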


Cisco: Making remote users feel at home on the new enterprise network

“The fundamental shift is that we need to think about our people working from home, and the home networks they use, as the default network. What we want is to create a high-quality micro-branch office in your home,” said Greg Dorai, vice president of product management and strategy for Cisco’s Enterprise Infrastructure and Solutions Group. “Now we must consider every work-from-home worker and every one of their home offices as worthy of the same level of connectivity support as our company headquarters and branches.” Realistically, not every company can provide every worker with headquarters-level support for their home networks, but there are technologies available now, and coming in the near future, that can address the different needs of different workers, Dorai said. In Cisco’s case, a couple of new offerings address wireless and wide-area networking connectivity for remote users. “For employees for whom best-effort connectivity isn’t enough, we can replace or augment their home-networking access point with a Wi-Fi router that acts as an extension of the corporate network,” Dorai said. “Home wireless access points, configured by company IT before the employee installs them, can provide advanced security and monitoring and prioritize bandwidth for applications that need it.”


Interview with RavenDB Founder Oren Eini

RavenDB works with JSON documents, so using JavaScript is a very natural way to work with the database. There are a few ways that you can work with JavaScript in RavenDB. RavenDB has a built-in JS interpreter (supporting ECMAScript 5.1 and large parts of 6) which can be used in queries and in patch operations. That gives you a lot of freedom to express what you want and to apply logic on the database server. ... There are a few things on our roadmap that I am really looking forward to. For example, in RavenDB 5.1 we are going to ship replication support for Byzantine networks. This is useful when you have RavenDB nodes deployed in an environment where you don’t trust the remote nodes. A good example is when you need to integrate with a RavenDB instance that is running on a user’s machine, and you want to allow that user’s RavenDB instance access to some of the data in the cloud. That allows you to build systems that use RavenDB and collaborate, without needing to trust the remote locations. And conversely, the remote location doesn’t need to trust you. This will allow RavenDB to take on itself the role of synchronization between these locations.
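
For a flavor of what running JavaScript inside RavenDB looks like, here are two hedged examples in RQL, RavenDB's query language; the collection and field names are hypothetical, but the declared-function projection and the update patch follow the documented patterns:

    // RQL query projecting through a declared JavaScript function
    declare function fullName(u) {
        return u.FirstName + " " + u.LastName;
    }
    from Users as u
    where u.Active = true
    select { FullName: fullName(u) }

    // RQL patch: server-side JavaScript applied to every matching document
    from Orders
    where Status = "Open"
    update {
        this.Status = "Archived";
    }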



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni