Daily Tech Digest - August 04, 2020

Ethical AI in healthcare

In many ways these technologies are going to be shaping us even before we've answered this question. We'll wake up one morning and realize that we have been shaped. But maybe there is an opportunity for each of us, in our own settings, in conversations with our colleagues and at the dinner table, and with society more broadly, to ask the question: What are we really working toward? What would we be willing to give up in order to realize the benefits? And can we build some consensus around that? How can we, on the one hand, take advantage of the benefits of AI-enabled technologies and, on the other, ensure that we're continuing to care? What would that world look like? How can we hold on to the reason we came into medicine in the first place -- we care about people -- and ensure that we don't inadvertently lose that? The optimistic view is that, by moving some tasks off of clinicians’ desks and moving the clinician away from the screen, maybe we can create space, and sustain space, for caring. The hope that is often articulated is that AI will free up time, potentially, for what really matters most. That's the aspiration. But the question we need to ask ourselves is: What would be the enabling conditions for that to be realized?


Apache Cassandra’s road to the cloud

What makes the goal of open sourcing cloud-native extensions to Cassandra achievable is the emergence of Kubernetes and related technologies. The fact that all of these technologies are open source and that Kubernetes has become the de facto standard for container orchestration has made it thinkable for herds of cats to converge, at least around a common API. And enterprises' embrace of the cloud has created demand for something to happen now. A cloud-native special interest group has formed within the Apache Cassandra community and is still at the early stages of scoping out the task; this is not part of the official Apache project, at least not yet. Of course, the Apache Cassandra community had to get its own house in order first. As Steven J. Vaughan-Nichols recounted in his exhaustive post, Apache Cassandra 4.0 is quite definitive, not only in its feature-completeness, but also in the thoroughness with which it has flushed out the bugs to make it production-ready. Unlike previous dot-zero versions, when Cassandra 4.0 goes GA, it will be production-ready. The 4.0 release hardens the platform with faster data streaming, not only to boost replication performance between clusters, but also to make failover more robust. But 4.0 stopped short of anything to do with Kubernetes.


From doorbells to nuclear reactors: why focus on IoT security

An important step in network security for IoT is identifying the company’s most essential activities and putting protections around them. For manufacturing companies, the production line is the key process. Essential machinery must be segmented from other parts of the company’s network, such as marketing, sales and accounting. For most companies, just 5% to 10% of operations are critical. Segmenting these assets is vital for protecting strategic operations from attacks. One of the greatest risks of the connected world is that something quite trivial, such as a cheap IoT sensor embedded in a doorbell or a fish tank, could end up having a huge impact on a business if it gets into the wrong communication flow and becomes an entry point for a cyber attack. To address these risks, segmentation should be at the heart of every company’s connected strategy. That means defining the purpose of every device and object linked to a network and setting boundaries, so it only connects to parts of the network that help it serve that purpose. With 5G, a system known as Network Slicing helps create segmentation. Network Slicing separates mobile data into different streams. Each stream is isolated from the next, so watching video could occur on a separate stream to a voice connection.
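
A minimal sketch of the purpose-based boundary-setting described above, assuming a simple in-memory policy table (all device and segment names are hypothetical):

```python
# Toy purpose-based segmentation check: each device may reach only the
# network segments that serve its declared purpose. Deny by default.
ALLOWED_SEGMENTS = {
    "doorbell-cam-01": {"video-upload"},      # cheap consumer IoT sensor
    "fishtank-sensor-02": {"telemetry"},
    "cnc-mill-07": {"production-line"},       # essential machinery
}

def may_connect(device: str, segment: str) -> bool:
    """Return True only if the segment is tied to the device's purpose."""
    return segment in ALLOWED_SEGMENTS.get(device, set())

# The doorbell can upload video but can never touch the production line.
assert may_connect("doorbell-cam-01", "video-upload")
assert not may_connect("doorbell-cam-01", "production-line")
```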


The ABCs of Data Science Algorithms

An organization’s raw data is the cornerstone of any data science strategy. Companies that have previously invested in big data often benefit from a more flexible cloud or hybrid IT infrastructure that is ready to deliver on the promise of predictive models for better decision making. Big data is the invaluable foundation of a truly data-driven enterprise. In order to deploy AI solutions, companies should consider building a data lake -- a centralized repository that allows a business to store structured and unstructured data on a large scale -- before embarking on a digital transformation roadmap. To understand the fundamental importance of a solid infrastructure, let’s compare data to oil. In this scenario, data science serves as the refinery that turns raw data into valuable information for business. Other technologies -- business intelligence dashboards and reporting tools -- benefit from big data, but data science is the key to unleashing its true value. AI and machine learning algorithms reveal correlations and dependencies in business processes that would otherwise remain hidden in the organization’s collection of raw data. Ultimately, this actionable insight is like refined oil: It is the fuel that drives innovation, optimizing resources to make the business more efficient and profitable.


Soon, your brain will be connected to a computer. Can we stop hackers breaking in?

Some of the potential threats to BCIs will be carry-overs from other tech systems. Malware could cause problems with acquiring data from the brain, as well as with sending signals from the device back to the cortex, either by altering or by exfiltrating the data. Man-in-the-middle attacks could also be recast for BCIs: attackers could either intercept the data being gathered from the headset and replace it with their own, or intercept the data being used to stimulate the user's brain and replace it with an alternative. Hackers could use methods like these to get BCI users to inadvertently give up sensitive information, or gather enough data to mimic the neural activity needed to log into work or personal accounts. Other threats to BCI security will be unique to brain-computer interfaces. Researchers have identified malicious external stimuli as one of the most potentially damaging attacks that could be used on BCIs: feeding in specially crafted stimuli to affect either the users or the BCI itself in an attempt to extract certain information – showing users images to gather their reactions to them, for example. Other similar attacks could be carried out to hijack users' BCI systems by feeding in fake versions of the neural inputs, causing them to take unintended actions – potentially turning BCIs into bots, for example.


Digital Strategy In A Time Of Crisis

The priority is to protect employees and ensure business continuity. To achieve this, it is essential to continue adapting the IT infrastructure needed for massive remote working and to continue the deployment of collaborative digital systems. Beyond these new challenges, the increased risks related to cybersecurity and the maintenance of IT assets, particularly the application base, require vigilance. After responding to the emergency, the project portfolio and the technological agenda must be rethought. This may involve postponing or freezing projects that do not create short-term value in the new context. Conversely, it is necessary to strengthen transformation efforts capable of increasing agility and resilience, in terms of cybersecurity, advanced data analysis tools, planning, or even optimisation of the supply chain. The third major line of action in this crucial period of transition is to tighten human resources management, focusing on the large-scale deployment of agile methods and the development of sensitive expertise such as data science, artificial intelligence or cybersecurity. The war for talent will re-emerge in force when the recovery comes, and it is therefore important to strengthen the attractiveness of the company.


Why ISO 56000 Innovation Management matters to CIOs

The ISO 56000 series presents a new framework for innovation, laying out the fundamentals, structures and support that ISO leaders say are needed within an enterprise to create and sustain innovation. More specifically, the series provides guidance for organizations to understand and respond to changing conditions, to pursue new opportunities and to apply the knowledge and creativity of people within the organization and in collaboration with external interested parties, said Alice de Casanove, chairwoman of the ISO 56000 standard series and innovation director at Airbus. ISO, which started work on these standards in 2013, began publishing its guidelines last year. The ISO 56002 guide on innovation management systems and the ISO 56003 guide on tools and methods for innovation partnership were published in 2019. ISO released its Innovation management -- Fundamentals and vocabulary in February 2020. Four additional parts of the series are forthcoming. The committee developed the innovation standards so that they'd be applicable to organizations of all types and sizes, de Casanove said. "All leaders want to move from serendipity to a structured approach to innovation management," she explained.


How plans to automate coding could mean big changes ahead

Known as a "code similarity system", the principle that underpins MISIM is not new: technologies that try to determine whether a piece of code is similar to another one already exist, and are widely used by developers to gain insights from other existing programs. Facebook, for instance, uses a code recommendation system called Aroma, which, much like auto-text, recommends extensions for a snippet of code already written by engineers – based on the assumption that programmers often write code that is similar to code that has already been written. But most existing systems focus on how code is written in order to establish similarities with other programs. MISIM, on the other hand, looks at what a snippet of code intends to do, regardless of the way it is designed. This means that even if different languages, data structures and algorithms are used to perform the same computation, MISIM can still establish similarity. The tool uses a new technology called context-aware semantic structure (CASS), which lets MISIM interpret code at a higher level – not just a program's structure, but also its intent. When it is presented with code, the algorithm translates it into a form that represents what the software does, rather than how it is written; MISIM then compares the outcome it has found for the code to that of millions of other programs taken from online repositories.
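
To make the notion of a similarity score concrete, here is a toy syntax-level baseline in Python that compares two snippets by the AST node types they contain. It is only an illustration of the structural comparison MISIM moves beyond, not Intel's actual method:

```python
# Toy code-similarity baseline: compare two snippets by the multiset of AST
# node types they contain, ignoring naming and formatting. MISIM's CASS models
# intent and goes much further; this only shows the shape of a similarity score.
import ast
from collections import Counter

def node_profile(source: str) -> Counter:
    """Count the AST node types appearing in a snippet."""
    return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

def similarity(a: str, b: str) -> float:
    """Jaccard-style overlap between two node profiles (0.0 to 1.0)."""
    pa, pb = node_profile(a), node_profile(b)
    total = sum((pa | pb).values())
    return sum((pa & pb).values()) / total if total else 0.0

# The same computation with different names and assignment styles still
# scores high, but a structural baseline like this is easy to fool.
snippet1 = "total = 0\nfor x in data:\n    total += x"
snippet2 = "s = 0\nfor item in values:\n    s = s + item"
print(similarity(snippet1, snippet2))
```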


RPA bots: Messy tech that might upend the software business

Where it gets interesting is that these RPA bots are basically building the infrastructure for all the other pieces to fit together, such as AI, CRM, ERP and even documents. They believe in the long-heralded walled-garden approach in which enterprises choose one best-of-breed infrastructure platform like Salesforce, SAP or Oracle and build everything on top of that. History has shown that messy sometimes makes more sense. The internet did not develop from something clean and organized -- it flourished on top of TCP: a messy, inefficient and bloated protocol. Indeed, back in the early days of the internet, telecom engineers were working on an organized protocol stack called open systems interconnection that was engineered to be highly efficient. But then TCP came along as the inelegant alternative that happened to work and, more important, made it possible to add new devices that no one had planned on in the beginning. Automation Anywhere's CTO Prince Kohli said other kinds of messy technologies have followed the same path. After TCP, HTTP came along to provide a lingua franca for building web pages. Then, web developers started using it to connect applications using JavaScript Object Notation (JSON).



Quote for the day:

"You have to have your heart in the business and the business in your heart." -- An Wang

Daily Tech Digest - August 03, 2020

When hybrid multicloud has technical advantages

Big companies can’t turn their ships fast enough, and the CIO must consider setting priorities — based on business impact and time to value — for application modernization. As Keith Townsend, co-founder of The CTO Advisor, put it on Twitter, “Will moving all of my Oracle apps to Amazon RDS net business value vs. using that talent to create new apps for different business initiatives? The problem is today, these are the same resources.” Then ask software developers, and you’ll find many prefer building applications that deploy to public clouds and that leverage serverless architectures. They can automate application deployment with CI/CD, configure infrastructure as code (IaC), and leave the low-level infrastructure support to the public cloud vendor and other cloud-native managed service providers. And will your organization be able to standardize on a single public cloud? Probably not. Acquisitions may bring in different public clouds than your standards, and many commercial applications run only on specific public clouds. Chances are, your organization is going to be multicloud even if it tries hard to avoid it. In the discussion below, we’ll examine a number of scenarios in which a hybrid cloud architecture offers technical advantages over private cloud only or multiple public clouds.


Layering domains and microservices using API Gateways

Bounded contexts are the philosophical building blocks of microservice architectures. If we want to layer our architecture, we need to layer our concepts. And as you might imagine, this is not difficult at all! We have the entire organization’s structure to be inspired by, and since domain-driven systems tie in very closely with how organizations are organized, there is plenty of opportunity to copy-paste. Our organization’s structure clearly tells us that a “domain” can mean very different things at different levels of abstraction. As soon as we say “abstraction”, we know that we are in a hierarchical world. If you have ever seen a junior developer try to explain a production outage to a senior manager, you know what I am talking about. The minutiae of system implementation don’t matter to the senior manager because at his level of operation, “outage due to timeout in calling payment authentication service from checkout validator service” is interpreted as “outage in checkout due to payment system”. He doesn’t care about “timeout”, “authentication”, “validator” or “service” – he cares about “checkout”, “outage”, and “payment”. The CEO doesn’t even care about “checkout” and “payment”, he probably just hears “tech” and “outage”.
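
A small sketch of this layered translation in Python; the incident fields and mapping rules are invented for illustration:

```python
# The same incident, summarized at three levels of abstraction.
INCIDENT = {
    "detail": "timeout calling payment authentication service from checkout validator service",
    "business_domain": "checkout",
    "dependency": "payment",
}

def describe(incident: dict, audience: str) -> str:
    if audience == "engineer":        # full minutiae of the implementation
        return incident["detail"]
    if audience == "senior_manager":  # domains and dependencies, not services
        return f"outage in {incident['business_domain']} due to {incident['dependency']} system"
    return "tech outage"              # the CEO's level of abstraction

for audience in ("engineer", "senior_manager", "ceo"):
    print(f"{audience}: {describe(INCIDENT, audience)}")
```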


5G: What Does The Future Hold? 

According to David Hardman, until 5G networks cover a much greater area of the UK, major initiatives around the technology are likely to focus more on what is possible in the future, rather than necessarily providing solutions today. “It’s a chicken and egg situation,” he says. “New, innovative products and services need to be developed in parallel with infrastructure roll-out in order to take full commercial advantage. Businesses coming through the 5G incubator, 5PRING, in the early days are likely to be larger established businesses that can plug in to what 5G currently offers. Full implementation will enable real commercial returns for these organisations, with the next wave of innovation then coming from new businesses that establish themselves when the 5G service is fully up and running.” “Although Huawei concerns and COVID-19 have impacted progress, the pandemic has also woken people up to what true digital communication is about. Can you imagine what the working world would have looked like if the virus had struck 15 years ago, when none of the remote working technology was readily available? If we look forward another 10 years, the development of 5G will bring a further evolutionary step-jump in what digital has to offer in all aspects of our lives.” 


To lead in the postcrisis tomorrow, put leadership and capabilities in place today

By and large, the great remote-working experiment brought on by the crisis has shown that a lot can be accomplished, immediately and virtually, with small teams, fewer and streamlined cycles, and without so much time expended on travel. As one executive noted when talking about his company’s meetings budget: “The problem isn’t where we are doing the meeting or why, but why did we have to convene two dozen people to all get together to make the decision … instead of just three people on a disciplined conference call.” Why are we talking about speed in a discussion about investing in an organization’s capabilities? Because without equally addressing speed, an organization’s progress innovating and adapting merely grinds along. Often, counterintuitively, it may be necessary to put in some “good bureaucracy.” During the crisis, some companies have traded in traditional videoconferencing, replete with large numbers of contributors, in favor of “wartime councils” in which multiple senior stakeholders gather once to act rapidly as decision makers. Using something as simple as a two-page document, teams can cut straight to the heart of a business issue and get to yes or no quickly, often with better results. Such exercises are worth retaining and propagating.


5G unmade: The UK’s Huawei reversal splits the global telecom supply chain

It would be the quintessential catalyst for market fragmentation. This was the argument being made by every telco, every equipment producer, and every telecom analyst with whom we spoke two years ago, without exception, back when the world seemed more cooperative, and globalization was a good thing. "What we don't want," explained Enders' James Barford, "is a situation where Huawei and ZTE work in China, Ericsson and Nokia work in the rest of the world, and Samsung does a bit here and there. Ultimately, telecoms companies everywhere have reduced choice, and at the basic standards level, suppliers aren't working together. The best ideas aren't winning through. At the moment, if one of Ericsson, Huawei, and Nokia has a good idea, the others have to follow ... to keep up. But we don't want to be in a situation where one of them has an innovation, and the rest of the world just kind of carries on. If Huawei has an innovation that makes China better, the rest of the world just misses out." Dr. Lewis chuckled a bit at this scenario. His assertion is that standards at all levels, but especially technology standards, become global by nature. Yes, countries may seek to assert some type of sovereign control over Internet traffic — besides China, Germany and Russia have also staked claims to digital sovereignty, and observers do perceive this as a global trend.


A Perfect Storm: The “New Normal” in Business and CCPA Compliance Enforcement

Privacy compliance, while something we have to do as the CCPA starts active enforcement on July 1, is not just a “one and done” task—you need to scale out your privacy program to stay ahead of each new mandate and adapt to today’s evolving landscape, whether COVID or the next major unpredictable event. Simply burying one’s head in the sand in apathy has very costly consequences. Let’s have a look at data subject reporting: while data protection is a critical aspect of avoiding a data breach or misuse, there is also a real cost in handling data subject rights requests from your loyal customers as the CCPA begins enforcement and the GDPR continues on. And this requires transparency into data access and use across your organization. A major industry analyst firm pointed out in a survey on the GDPR last year that this activity can represent a potential outlay of $1,406 per request when inquiries are handled manually, on a case-by-case basis. Without an automated approach to privacy compliance, the costs to manage data subject requests at scale can quickly overwhelm unprepared organizations. To stay ahead, you’ll need to take advantage of automation and AI to find customer data across your organization and report on its use, or risk privacy regulatory violations with fines and brand reputation at stake.
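
A back-of-envelope illustration of that per-request cost at scale; the annual request volume below is hypothetical:

```python
# Manual data subject request (DSR) handling cost, using the ~$1,406 figure
# cited above. The volume is an invented example for a mid-size company.
cost_per_manual_request = 1406   # USD per request, per the cited survey
requests_per_year = 5000         # hypothetical annual DSR volume

annual_manual_cost = cost_per_manual_request * requests_per_year
print(f"${annual_manual_cost:,} per year")  # $7,030,000 per year
```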


Is a lack of governance hindering your cloud workload protection?

As operations in the cloud grow together with the teams managing them, company-wide visibility and accountability become critical issues. After all, you can’t accurately detect, stop or respond to something if you can’t see it. Workload events need to be captured, analysed and stored so that security teams have the visibility they need to detect and stop threats in real time, as well as to hunt down and investigate them. Accountability is a critical concern for information security in cloud computing, representing most importantly the trust in service relationships between clients and cloud providers (Microsoft Azure et al). Indeed, without evidence of accountability, a lack of trust and confidence in cloud computing can raise its head among those concerned with managing the business. Sensitive data (PII) is processed in the cloud, and governance is critical to make sure that such data is always processed and stored in a secure manner. Data protection is big news these days – especially with the advent of compliance regulations such as the General Data Protection Regulation (GDPR) governing PII. The shared responsibility model between the cloud platform provider you choose and your organisation means that you (the organisation) remain responsible for the protection and security of any sensitive data from your end customers.


Not seeing automation success? Think like a pilot

Figuring out what processes need to be automated is one thing. Managing them from then on is a whole new ball game – and one that will require constant attention. After all, autopilot only kicks in once the plane is successfully cruising. Process mining technologies help you analyse and discover processes using your business’ data, but process intelligence goes several steps further. This offers the deep understanding and real-time monitoring of your processes that many businesses are missing. Then, it can drill down into the granular details, explain why processes don’t work and how to fix them, and give you the tools to solve problems you didn’t even know existed. It’s vital that business leaders check in on their processes often during this phase, to see where issues lie, which processes are most problematic, and which are ripe for automation. Once this is in good shape, you can move on to intelligent automation – combining process intelligence with automation like RPA. This is the switch to autopilot. Here, the technology can spot potential issues with processes like bottlenecks or delays before they happen, and update bots with corrective actions to fix the failing process. 
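
As a minimal sketch of one task process intelligence performs -- finding where a process drags -- here is a toy bottleneck check over an invented event-log format:

```python
# Find the slowest step in a process from a flattened toy event log of
# (case_id, step, duration_minutes) rows. Real process-intelligence tools
# work on far richer logs; this only illustrates the core idea.
from collections import defaultdict
from statistics import mean

EVENT_LOG = [
    ("A1", "receive_order", 2), ("A1", "credit_check", 45), ("A1", "ship", 10),
    ("A2", "receive_order", 3), ("A2", "credit_check", 50), ("A2", "ship", 12),
]

durations = defaultdict(list)
for _case, step, minutes in EVENT_LOG:
    durations[step].append(minutes)

# The step with the highest average duration is the likely bottleneck.
averages = {step: mean(ms) for step, ms in durations.items()}
bottleneck = max(averages, key=averages.get)
print(f"bottleneck: {bottleneck} ({averages[bottleneck]:.0f} min average)")
```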


What are script-based attacks and what can be done to prevent them?

The use of scripts poses many advantages to the attacker: scripts are easy to write and execute, trivial to obfuscate, and extremely polymorphic. Moreover, attackers can use many types of script files to carry out an attack – the most popular being PowerShell, JavaScript, HTA, VBA, VBS, and batch scripts. Since fileless attacks occur in memory, traditional static file detection is rendered useless. Furthermore, scripts complicate post-event analysis since many artifacts related to the attack only exist in the computer’s memory and may be overwritten or removed through a reboot, for example. In-memory detection and artifact collection are possible through the use of heuristics and behavioral analysis, which can detect malicious in-memory activities. Script-based attacks run on virtually all Windows systems, increasing the potential attack surface and the chance of infection. One major drawback of script-based attacks is that, unless deployed via an exploit, user interaction is required for the script to run. For example, in most cases, the script is contained either as a script file within an email requiring user action or as a VBA macro in a document that requires the user to enable macros.
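
As a deliberately simple illustration of script-focused heuristics, the sketch below flags a few command-line patterns associated with obfuscated PowerShell. Real in-memory and behavioral detection inspects live activity rather than strings; the patterns here are illustrative only:

```python
# Flag command lines matching common script-abuse indicators, such as
# encoded or hidden-window PowerShell and script-host launches.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell.*-enc(odedcommand)?\s", re.IGNORECASE),
    re.compile(r"powershell.*-w(indowstyle)?\s+hidden", re.IGNORECASE),
    re.compile(r"mshta|wscript.*\.vbs|cscript.*\.vbs", re.IGNORECASE),
]

def looks_suspicious(cmdline: str) -> bool:
    return any(p.search(cmdline) for p in SUSPICIOUS_PATTERNS)

print(looks_suspicious("powershell.exe -NoP -W Hidden -Enc SQBFAFgA..."))  # True
print(looks_suspicious("notepad.exe report.txt"))                          # False
```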


The Illusion of Digital Transformation in Banking

There are strong indications that leadership lacks experience in implementing such massive transformations. This has resulted in a prioritization of purchased technology that may only scratch the surface of the needed transformation. For instance, a new mobile banking platform is only as good as the underlying processes that must also be changed to improve the overall digital banking customer experience. It also appears that the current financial strength of the industry is resulting in complacency around making large, overarching changes to what has long been the operating norm in banking. But the challenges don’t end there. On the not-too-distant horizon, banks and credit unions will need to address a digital skills shortage and the internal culture shift required to facilitate needed innovation and transformation. ... The organizations with the greatest digital transformation maturity tend to be upgrading the greatest number of digital technologies. In most cases, the prioritization is determined by a mix of business requirements, cost, ease (or difficulty) of transformation, and skills available either internally or through partners. Organizations with the highest digital transformation maturity have also made progress on implementing the more sophisticated technologies. These include artificial intelligence (AI), robotic process automation (RPA), cloud computing, the Internet of Things (IoT), and blockchain solutions.



Quote for the day:

"Open Leadership: the act of engaging others to influence and execute a coordinated and harmonious conclusion." -- Dan Pontefract

Daily Tech Digest - August 02, 2020

Test Automation Best Practices

Designing tests and test data is the most crucial and time-consuming portion of the testing process. To be valid, test design must be precise in indicating the software functionalities to be tested. During the design phase, test conditions are identified based on specified test requirements, effective test modules and metrics are developed, and the anticipated behavior that will yield valid results is determined. Automated testing performs evaluations against manual test requirements to verify the reliability of the automated process. An automation framework configures the testing modules, supports the development of automated test scripts, and monitors and maintains test results and related documentation; it is the structural foundation of the automated test suite. Automation works best when focused on identified priorities for deployment. Manual testing can precede automated testing to contribute test conditions and data that test automation can then use for regression and other types of testing.
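
As a minimal illustration of designed test conditions replayed automatically, here is a hedged pytest sketch; the function under test and its cases are hypothetical:

```python
# Test data fixed at design time, replayed on every run for regression checks.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Toy function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

# One row per identified test condition, boundaries included.
@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 0, 100.0),    # boundary: no discount
    (100.0, 15, 85.0),    # typical case
    (19.99, 100, 0.0),    # boundary: full discount
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 120)
```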


Winning in Digital Innovation: Turning Scale and Legacy into Strengths

Over the past few years, disruptive forces have hit industry after industry. Travel has been disrupted by Priceline, Expedia, TripAdvisor, and Airbnb, transportation by Uber, and retail by Amazon and Alibaba. For established businesses, the most disruptive threats tend to come from outside traditional competition. New companies not only spot opportunities to create value that many incumbents fail to see, they also tend to operate with different business models. In fact, it’s no longer about having a level playing field. The disruptors are playing an entirely new game. Google is a master of this new game, converting an array of industries into advertising revenue. Amazon is another serial disruptor with its Amazon Prime now in a two-horse race with Netflix—undermining the model of traditional broadcast industries. Even those that have not yet been significantly impacted by these forces are not safe. Over the next five years, 40 percent of companies will face some form of digital disruption, according to Forbes magazine. Artificial intelligence is beginning to attack knowledge-based industries previously seen as safe from disruption, thanks in large part to companies such as Google and Amazon offering “AI on tap.”


How Payments Fintech Is Using Banking As A Service To Drive Growth

There are two core challenges that Banking as a Service helps an international payments company overcome. The first is the need for a regulated entity to be involved when it comes to offering many core banking type services such as checking accounts or savings and lending products. The second is that the technology requirements and capabilities to offer these products, such as maintaining account ledgers for customer accounts, are very different to those of core payments services. Obtaining the necessary regulatory licenses and building the technology can be two of the most expensive cost items for a financial services company. Banking as a Service exists to reduce both the time and cost Fintechs spend on these two items, allowing them to focus on their core businesses. And for cross-border payments companies or Fintechs with international ambitions, a whole additional level of complexity comes with the geographic dimension. Regulations and technologies are very different from country to country worldwide, which means more time and more cost. We spoke with the CEOs and senior management of various Banking as a Service companies in the UK and US to understand what is driving the growth in Banking as a Service.


Here’s why IT departments need predictive analytics

AI-based detection platforms are capable of monitoring IT systems in real time, checking for early signs of potential failures. To take one example, my company Appnomic has managed to handle 250,000 severe IT incidents for our clients with AI, which equals more than 850,000 man-hours of work. By harnessing machine learning, such platforms can use past data to learn how problems typically develop, enabling a company to step in before anything unfortunate occurs. In 2017, Gartner coined the term “artificial intelligence systems for IT operations” (AIOps) to describe this kind of AI-driven predictive analysis, and the market research firm believes that the use of AIOps will grow considerably over the next few years. In 2018, only 5 percent of large enterprises were using AIOps, but the firm estimates that by 2023 this figure will rise to 30 percent. This growth will be driven by the fact that several benefits come from the application of machine learning and data science to IT systems. Aside from detecting likely problems before they occur, AI can significantly reduce false alarms, in that it can gain a more reliable grasp of what actually leads to failures than previous technologies and human operators.
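
A hedged sketch of the underlying idea -- learn “normal” from historical metrics, then score new readings -- using scikit-learn's IsolationForest on synthetic data (not Appnomic's actual platform):

```python
# Train an anomaly detector on past (cpu_pct, latency_ms) readings gathered
# under normal operation, then flag new readings that deviate.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(40, 5, 500),    # CPU %
                          rng.normal(120, 15, 500)]) # latency ms

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# One routine reading, one early sign of trouble.
readings = np.array([[42.0, 125.0], [95.0, 900.0]])
print(model.predict(readings))  # [ 1 -1 ] -- the -1 flags the likely failure
```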


The Garmin Hack Was a Warning

Recent victims include not just Garmin but Travelex, an international currency exchange company, which ransomware hackers successfully hit on New Year’s Eve last year. Cloud service provider Blackbaud—relatively low-profile, but a $3.1 billion market cap—disclosed that it paid a ransom to prevent customer data from leaking after an attack in May. And those are just the cases that go public. “There are certainly rather large organizations that you are not hearing about who have been impacted,” says Kimberly Goody, senior manager of analysis at security firm FireEye. “Maybe you don’t hear about that because they choose to pay or because it doesn’t necessarily impact consumers in a way it would be obvious something is wrong.” Bigger companies make attractive ransomware targets for self-evident reasons. “They’re well-insured and can afford to pay a lot more than your little local grocery store,” says Brett Callow, a threat analyst at antivirus company Emsisoft. But ransomware attackers are also opportunistic, and a poorly secured health care system or city—neither of which can tolerate prolonged downtime—has long offered better odds for a payday than corporations that can afford to lock things down.


Facebook’s newest proof-of-concept VR headset looks like a pair of sunglasses

The proof-of-concept glasses aren’t just thin for looks, though — they also apparently beam images to your eyes in a way that’s different from standard VR headsets on the market today. I’ll let Facebook’s research team explain one of those techniques, called “holographic optics”: Most VR displays share a common viewing optic: a simple refractive lens composed of a thick, curved piece of glass or plastic. We propose replacing this bulky element with holographic optics. You may be familiar with holographic images seen at a science museum or on your credit card, which appear to be three-dimensional with realistic depth in or out of the page. Like these holographic images, our holographic optics are a recording of the interaction of laser light with objects, but in this case the object is a lens rather than a 3D scene. The result is a dramatic reduction in thickness and weight: The holographic optic bends light like a lens but looks like a thin, transparent sticker. The proof-of-concept headset also uses a technique Facebook calls “polarization-based optical folding” to help reduce the amount of space between the actual display and the lens that focuses the image.


Regulatory Uncertainty Greatest Problem For Blockchain Entrepreneurs, Says Producer

A regulatory environment characterized by widespread uncertainty is the single biggest challenge facing entrepreneurs in the digital currency and blockchain industry, according to J.D. Seraphine, who produced the docuseries “Open Source Money.” ... The U.S. government has had an overall uneven approach to regulating digital currencies and blockchain. It is a fairly new and complex technology, so part of that is attributable to a learning curve for regulators and government officials. There are also multiple agencies that have claimed jurisdiction over the regulation of digital assets, each classifying them differently, making it very difficult for companies to know how to operate in this industry in the U.S. The industry needs clear regulations and rules, or for the government to step back completely as it did in the early days of the internet. I believe this gray area of uncertainty is the worst thing for entrepreneurs and companies attempting to operate here, and it has led to other countries moving ahead of the U.S. in pioneering what many are calling the most important technology since the creation of the internet.


Black Hat Virtually: An Important Time to Come Together as a Community

What concerns me the most about the moment we're in right now is that the bad actors are getting more sophisticated by the day. The simple attacks don't work as often anymore. I've seen this script numerous times in the course of my career when I look at the work our research teams publish. What worked six months ago may not work now. The only way we can fight back against a more sophisticated opponent is through knowledge-sharing and collective protection, both formal and informal. I'm grateful that the Black Hat community is there to swap war stories of how we've succeeded — and failed — against adversaries. Those conversations, even digitally, will make the difference. Cybersecurity is a team sport. The conversations that the cybersecurity community will have at this year's Black Hat (and at the subsequent DEF CON) will be instrumental in shaping how we all respond going forward as the world has changed. It's our responsibility, as a security community, to take this digital conference just as seriously as we would take an in-person one. 


Is ethical use of AI the only way forward?

Companies across the world are spending a lot of time and money on AI, and experts are doing a lot of research to develop high-quality, genuinely useful AI-based tools. AI is already popular and will only become more so. But do we know why and how it should be used? Are we looking only at the ethical uses of AI, or is anyone trying to build something unethical with it as well? Artificial intelligence is sometimes dismissed as overhyped, though it is not, and we have also been reading about some of its dangers in the recent past. AI has mostly turned out to be useful for humans, but because it mimics human intelligence, some risk is involved as well. AI stops being useful only when people find it difficult to understand how to use it and make the most of it; the intentions of those using it have to be good, too. AI itself is not harmful, but users have to make sure that AI tools are used rightly. Even so, artificial intelligence gives humans some cause for worry.


Enterprise architecture heats up to meet changing needs

Skills are definitely one of the biggest challenges at the moment. Most people are making the decision to expand their EA, or start an EA if they haven't had one, and they just move people from one box to another. Just because you can code software doesn't mean you can think like an architect. If you are a systems engineer, you know the processes and systems, but it doesn't mean you can do capability modeling and things like that. When it comes to tools, one of the biggest barriers to EAs moving forward is ROI. The reason it's hard to come up with an ROI is because people don't do activity-based accounting. They don't identify how long they spend doing all of their tasks. If they had that information, they could say, 'I can save this amount of money if I automate these things.' The other big barrier is that people on the business side are now tech-savvy, and they question the need for EA. They don't want EAs telling them to use certain technology. A lot of the business [leaders] are now thinking, 'IT is just a cost center. I want [IT] to be an order taker.'




Quote for the day:

"And no heart has ever suffered when it goes in search of its dream." -- Paulo Coelho

Daily Tech Digest - August 01, 2020

Top 6 cybersecurity trends to watch for at Black Hat USA 2020

Tom Kellermann, head of cybersecurity strategy at VMware Carbon Black, said, "Black Hat USA 2020 will highlight the dramatic surge and increased sophistication of cyberattacks amid COVID-19. A recent VMware Carbon Black report found that from the beginning of February to the end of April 2020, attacks targeting the financial sector have grown by 238%. Cybercriminals are also preying on the virtual workforce; the mass shift to remote work has sparked increasingly punitive attacks. Malicious actors have set their sights on commandeering digital transformation efforts to attack the customers of organizations. These burglaries have escalated to a home invasion, with destructive attacks exploding to a 102% increase with the use of NotPetya-style ransomware and wipers. Spear phishing is no longer the primary attack vector; rather, OS vulnerabilities, application exploitation, RDP open to the internet, and island hopping have risen to the top." Code42 CISO and CIO Jadee Hanson said, "Top of mind for me is how the mental and emotional wellbeing of our workforce during the pandemic is impacting people's work and behavior and, as a result, their risk profiles. Businesses need to have a strong pulse on how their employees are doing."


Metadata Repository Basics: From Database to Data Architecture

While knowledge graphs have shown potential for the metadata repository to find relationship patterns among large amounts of information, some businesses want more from a metadata repository. Streaming data ingested into databases from social media and IoT sensors also needs to be described. According to a New Stack survey of 800 professional developers, real-time data use has seen a significant increase. What does this mean for the metadata repository? Enterprises want metadata to show the who, what, why, when, and how of their data. The centralized metadata repository database answers these questions but remains too slow and cumbersome to handle large amounts of light-speed metadata. Knowledge graphs have the advantage of dealing with lots of data, and quickly. However, knowledge graphs display only specific types of patterns in their metadata repository. Companies need another metadata repository tool. Here comes the data catalog, a metadata repository informing consumers what data lives in data systems and the context of this data. Automation and discovery make the data catalog attractive by ensuring it keeps up with fast-moving data and its changes. Business and technical users can easily query the data catalog.


Confidential Computing Will Revolutionize The Internet Of Things

Confidential computing is all about trust. Developers in this field are seeking to accelerate the adoption of what is known as “Trusted Execution Environment” (TEE) technologies. A TEE sequesters code and data away from applications on the main operating system in order to keep them away from adversaries who may gain access to the main operating system. Or, to use an analogy from this article, if the main system is in the White House, for instance, with a variety of protections, a TEE is the bunker underneath it. Within any of these bunkers, only those entities authorized by the actual data owner can view or alter the data. This enables all sorts of applications to operate efficiently without ever needing to have direct access to data. This goes beyond the better-known technique of anonymizing data, which just removes personal identifiers from a database. While anonymization protects privacy, it limits the usefulness of the data, whereas confidential computing secures data even as it is in use, allowing for wider application. Confidential computing protects encrypted software code and data from malicious administrators and hackers in public clouds; it protects sensitive machine-learning models and enables privacy-preserving data analytics ...


Technical Challenges of IoT Cybersecurity in a Post-COVID-19 World

For manufacturers of connected IoT products, it is key to focus on their supply chain and increase the ability to break their products down into their respective components. Effective management of vulnerabilities can be done only when information about supply chain dependencies is accurate and recent. A second side effect of the pandemic is the massively increased reliance on cloud-based communication systems. It is unthinkable to conduct business effectively and in compliance with the current legal restrictions without holding a videoconference, sharing a document, or presenting a slide set remotely. The systems used to perform those tasks, however, largely follow the same basic principles that typical client-server architectures have followed for roughly 20 years. While the cryptographic transport protocols have improved significantly since SSLv2, there is still a disparity in the level of trust between client and server: clients are typically considered entirely untrusted while servers hold all the secrets and relay data securely. While this is easiest for the implementors of backend infrastructure, such a design is fundamentally unpleasant from a security point of view.
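
One way to picture the supply-chain dependency tracking described above is a simple software bill of materials (SBOM) lookup. The component names and CVE identifiers below are invented:

```python
# Cross-reference a product's component list against known-vulnerable versions.
SBOM = {
    "smart-doorbell-fw-2.3": {
        "tls-lib": "1.0.2",
        "rtos-kernel": "4.1",
        "mqtt-client": "0.9",
    },
}

KNOWN_VULNERABLE = {
    ("tls-lib", "1.0.2"): "CVE-XXXX-0001 (hypothetical)",
    ("mqtt-client", "0.8"): "CVE-XXXX-0002 (hypothetical)",
}

for product, components in SBOM.items():
    for name, version in components.items():
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            print(f"{product}: {name} {version} affected by {advisory}")
```

An accurate, current SBOM is what makes this lookup meaningful; stale dependency information leaves the check blind.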


Industrial robots could 'eat metal' to power themselves

The researchers' vision for a "metal-air scavenger" could solve one of the quandaries of future IoT-enabled factories. That quandary is how to power a device that moves without adding mass and weight, as one does by adding bulky batteries. The answer, according to the University of Pennsylvania researchers, is to try to electromechanically forage for energy from the metal surfaces that a robot or IoT device traverses, thus converting material garnered, using a chemical reaction, into power. "Robots and electronics [would] extract energy from large volumes of energy dense material without having to carry the material on-board," the researchers say in a paper they've published in ACS Energy Letters. It would be like "eating metal, breaking down its chemical bonds for energy like humans do with food." Batteries work by repeatedly breaking and creating chemical bonds. The research references the dichotomy between computing and power storage. Computing is well suited to miniaturization, and processors have been progressively reduced in size while performance has increased, but battery storage hasn't.


IoT Automation Trend Rides Next Wave of Machine Learning, Big Data

In particular, automated discovery of IoT environments for cybersecurity purposes has been an ongoing driver of IoT automation. That is simply because there is too much machine information to manually track, according to Lerry Wilson, senior director for innovation and digital ecosystems at Splunk. The target is anomalies found in data stream patterns. “Anomalous behavior starts to trickle into the environment, and there’s too much for humans to do,” Wilson said. And, while much of this still requires a human somewhere “in the loop,” the role of automation continues to grow. Wilson said Splunk, which focuses on integrating a breadth of machine data, has worked with partners to ensure incoming data can now kick off useful functions in real time. These kinds of efforts are central to emerging information technology/operations technology (IT/OT) integration. This, along with machine learning (ML), promises increased automation of business workflows. “Today, we and our partners are creating machine learning that will automatically set up a work order – people don’t have to [manually] enter that anymore,” he said.


Putting AI And Machine Learning To Work In Cloud-Based BI And Analytics

By leveraging modern technology to automate data lake migration and replication to the cloud with WANdisco LiveData Cloud Services through its patented Distributed Coordination Engine platform. This innovation is founded on fundamental IP based on forming consensus in a distributed network. This is an extremely hard problem to solve, and to this day some people believe that it cannot be solved. So what is this problem at a high level? If you have a network of nodes, distributed across the world with little to no knowledge of the distance and bandwidth between the nodes, how can you get the nodes to coordinate with each other without worrying about any failure scenarios? The solution is the application of a consensus algorithm, and the gold standard in consensus is an algorithm called Paxos. Our Chief Scientist Dr. Yeturu Aahlad, an expert in distributed systems, devised the first, and even now only, commercialised version of Paxos. By doing so, he solved a problem that had been puzzling computer scientists for years. WANdisco’s LiveData Cloud Services are based on this core IP, including our products focused on analytical data and the challenge of migrating this data to the cloud while keeping the data consistent in multiple locations.
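
For readers unfamiliar with the algorithm named here, below is a toy single-decree Paxos round in Python: a classroom sketch with no networking or failure handling, and in no way WANdisco's implementation:

```python
# Minimal single-decree Paxos: a proposer needs a majority of acceptors in
# both the prepare (1) and accept (2) phases for a value to be chosen.

class Acceptor:
    def __init__(self):
        self.promised = -1      # highest ballot promised so far
        self.accepted = None    # (ballot, value) previously accepted, if any

    def prepare(self, ballot):
        """Phase 1b: promise to ignore lower ballots; report any prior accept."""
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        """Phase 2b: accept unless a higher ballot has since been promised."""
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    """Run one proposer round; return the chosen value or None."""
    quorum = len(acceptors) // 2 + 1
    replies = [a.prepare(ballot) for a in acceptors]
    granted = [prior for ok, prior in replies if ok]
    if len(granted) < quorum:
        return None
    # Safety rule at the heart of Paxos: if any acceptor already accepted a
    # value, adopt the one with the highest ballot instead of our own.
    priors = [p for p in granted if p is not None]
    if priors:
        value = max(priors)[1]
    acks = sum(a.accept(ballot, value) for a in acceptors)
    return value if acks >= quorum else None

acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, ballot=1, value="replicate-to-cloud"))
```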


Breach of high-profile Twitter accounts caused by phone spear phishing attack

Whatever specific spear phishing method was used in the breach, clearly the attackers relied on a combination of technical skills and social engineering know-how to be able to convince employees to share their account credentials. Of course, that's the M.O. for many phishing attacks and other types of malicious campaigns. "This attack relied on a significant and concerted attempt to mislead certain employees and exploit human vulnerabilities to gain access to our internal systems," Twitter acknowledged. "This was a striking reminder of how important each person on our team is in protecting our service." Other than training employees through phishing simulations and similar methods, trying to correct human behavior is always challenging. That's why socially engineered attacks are often successful. "This incident demonstrates that social engineering is still a common method for attackers to gain access to internal systems," Ray Kelly, principal security engineer at WhiteHat Security, told TechRepublic. "The human is oftentimes the weakest link in any security chain."


Meeting the Demand: Containerizing Master Data Management in the Cloud

Containerizing MDM as a PaaS offering is essential to realizing the flexibility for which the cloud is renowned. Although this capability is redoubled with Docker or Kubernetes orchestration platforms, containers themselves “reduce the disruption of the architecture of the platform and provide more portability and flexibility for customers,” Melcher remarked. “What a container really is is kind of a preconfigured application, if you will.” These lightweight repositories include everything needed to deploy apps. Without them, MDM as a native PaaS offering increases the propensity for vendor lock-in per cloud provider, and all but eliminates on-premise hybrid clouds. The speed and ease of containerizing MDM services lets customers “spin the platform up in a matter of minutes without downloading installation and configuration guides, spinning up a Windows server, or loading up a bunch of pre-requisites,” Melcher mentioned. “All of that sort of tribal type knowledge that customers have had to historically take on when they buy an application goes away.”


The Role of Augmented Data Management in the Workplace

The negative aspects of the current situation with COVID-19 and the associated economic downturn have prompted a shift in the workplace, which is driving more opportunities for growth, greater visibility into the B2B buying process, and ensuring quality customer experience throughout the buying cycle, according to Rashmi Vittal, CMO at Foster City, Calif.-based Conversica. “This change introduces intelligent automation into the workplace, something we refer to as the Augmented Workforce,” she said. An Augmented Workforce describes a workplace where business professionals work alongside artificial intelligence to drive better business outcomes. One such AI-driven technology making dramatic changes for customer-facing teams, including sales, marketing, and customer success, is the intelligent virtual assistant (IVA). IVAs accelerate revenue across the customer journey by autonomously engaging contacts, prospects, customers, or partners in human-like, two-way interactions at scale, driving towards the next best action.



Quote for the day:

"It is amazing what you can accomplish if you do not care who gets the credit." -- Harry S. Truman

Daily Tech Digest - July 31, 2020

5 Must-Have Skills For Remote Work

When teams work remotely, at least half of all communication is done via writing rather than speaking. This means communicating through emails, Slack, or texting. It even applies to using the chat function while you’re on a video call. You need to be able to communicate clearly no matter what platform you’re using. ... Working remotely doesn’t mean working alone. You’re still going to be part of a team, which means working with colleagues on projects and tasks. Without a physical space to gather, collaboration can be a bit more challenging. Communication skills and collaboration skills go hand in hand, as communication plays a huge role in successful collaboration. Find the right balance of video meetings, phone calls, and messages to ensure ample but not overwhelming communication. ... You might be working with colleagues who are in a different time zone which impacts deadlines, when meetings can be scheduled, and even when you can get in touch with those colleagues. If you’re assigned to work with a new team, you might have to adapt to the way that team works.


How to secure your project with one of the world’s top open source tools

Dynamic application security testing (DAST) is a highly effective way to find certain types of vulnerabilities, like cross-site scripting (XSS) and SQL injection (SQLi). However, many of the commercial DAST tools are expensive to use and are often only applied when a project is getting ready to ship, if they are used at all. ZAP can be integrated into a project’s CI/CD pipeline from the start, ensuring that many common vulnerabilities are detected and can be fixed very early in the project lifecycle. Testing in development also means that you can avoid the need to handle tools and features designed to make automation difficult, like single sign-on (SSO) and web application firewalls (WAFs). ... For web applications, or any projects that provide a web based interface, you can use ZAP or another DAST tool. But don’t forget to use static application security testing (SAST) tools as well. These are particularly useful if they are introduced when starting a project. If SAST tools are used against more mature projects then they often flag a large number of potential issues, which makes it difficult to focus on the most critical ones.
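
ZAP can be driven from a CI job; here is a hedged sketch using its Python client (the python-owasp-zap-v2.4 package), assuming a ZAP daemon is already listening on 127.0.0.1:8080. The target URL and API key are placeholders:

```python
# Spider the target so ZAP knows the site, run an active scan, then fail
# the pipeline if any high-risk alerts (e.g. XSS, SQLi) are found.
import time
from zapv2 import ZAPv2

TARGET = "http://staging.example.com"   # hypothetical staging deployment
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

scan_id = zap.spider.scan(TARGET)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

high = [a for a in zap.core.alerts(baseurl=TARGET) if a["risk"] == "High"]
for alert in high:
    print(alert["alert"], alert["url"])
raise SystemExit(1 if high else 0)
```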


Using the Attack Cycle to Up Your Security Game

Attack sophistication is directly proportional to the goals of the attackers and the defensive posture of the target. A ransomware ring will target the least-well-defended and the most likely to pay (ironically, cyber insurance can create a perverse incentive in some situations), because there is an opportunity cost and return-on-investment calculation for every attack. A nation-state actor seeking breakthrough biotech intellectual property will be patient and well-capitalized, developing new zero-day exploits as they launch a concerted effort to penetrate a network's secrets. One of the most famous of these attacks, Stuxnet, exploited vulnerabilities in SCADA systems to cripple Iran's nuclear program. The attack was thought to have penetrated the air-gapped network via infected USB thumb drives. As awareness of these complex, multi-stage attacks has risen, startups have increased innovation - such as in the behavior analytics space, where complex machine-learning algorithms determine "normal" behaviors and look for that one bad actor. Threat actors are the individuals and organizations engaged in the actual attack. In the broadest sense of the term, they are not always malicious.


The FI and fintech opportunity with open banking

What’s different now is that over the last two or three years the industry has come together to collaborate on evolving the ecosystem. One example is the formation of an industry group called the Financial Data Exchange. As a result, financial institutions, financial data aggregators, and related parties are developing standards for access, authentication, and transparency that will provide end-to-end governance to keep the ecosystem safe and fair, and consumer data secure. ... “Banks are looking for technology innovation to address both back office challenges, get faster and leaner, reduce costs, but also to increase engagement with their customers,” Costello says. “Certainly at times like this we see how important digital engagement is.” As some FIs are closing branches to reduce costs, digital engagement becomes essential. And if it’s done right, it works. And the opportunity for innovation abounds. The better multi-factor authentication and authorization that comes with open banking means that the bank has a higher degree of confidence that the person with whom they’re engaging is the account holder. Now that they have a higher degree of trust, they can offer a higher degree of engagement.


Reduced cost, responsive apps from micro front-end architecture

Early micro front-end projects focused on providing better separation of logic and UI elements in smaller, more dynamic components. But modern micro front ends have moved far beyond loosely coupled code to full-scale Kubernetes-based deployment. There's even been a recent trend of micro front ends containerized as microservices and delivered directly to the client. For example, the H2 app by Glofox recently adopted this approach to implement a PaaS for health and fitness apps, which gyms and health clubs then customize and provide to clients. The app uses the edgeSDK from Mimik Technology Inc. to manage the deployment of containerized micro front-end microservices that run natively across iOS, Android and Windows devices. In addition, a micro front-end deployment reduces the server load. It consumes only client-side resources, which improves response times in apps vulnerable to latency issues. Users once had to connect to databases or remote servers for most functions, but a micro front end greatly reduces that dependency.


8 Tips for Crafting Ransomware Defenses and Responses

For any attack that involves ransomware, the fallout can be much more extensive than simply dealing with the malware. And organizations that don't quickly see the big picture will struggle to recover as quickly and cost-effectively as they might otherwise be able to do (see: Ransomware + Exfiltration + Leaks = Data Breach). That's why understanding not just what ransomware attackers did inside a network, but what they might still be capable of doing - inside the network, as well as by leaking data - is an essential part of any incident response plan, security experts say. So too is identifying how intruders got in - or might still get in - and ensuring those weaknesses cannot be exploited again, says Alan Brill, senior managing director in Kroll's cyber risk practice. "If you don't lock it down, it's very simple: You're still vulnerable," he tells Information Security Media Group. "If you lock down what you thought was the issue but you were wrong - it wasn't the issue - that they weren't just putting ransomware in your system but they've been in there for a month examining your system, exfiltrating data and lining up how to do the most damage when they launched the ransomware, you may not even know what happened."


We've forgotten the most important thing about AI. It's time to remember it again

Leufer has just put the final touches to a new project to debunk common AI myths, which he has been working on since he received his Mozilla fellowship – an award designed for web activists and technology policy experts. One of the most pervasive of those myths is that AI systems can act of their own accord, without supervision from humans. It certainly doesn't help that artificial intelligence is often associated with humanoid robots, suggesting that the technology can match human brains. An AI system deployed, say, to automate insurance claims is very unlikely to come in the form of a human-looking robot, and yet that is how the technology is often portrayed, regardless of its application. Leufer calls these "inappropriate robots", often shown carrying out human tasks that would never be necessary for an automaton. Among the most common offenders are robots typing on keyboards and robots wearing headphones or using laptops. The powers we ascribe to AI as a result even have legal ramifications: there is an ongoing debate about whether an AI system should own intellectual property, or whether automatons should be granted citizenship.


Scaling Distributed Teams by Drawing Parallels from Distributed Systems

The biggest bottleneck for any distributed team is decision-making. As in distributed systems, if we apply “deliver accountability and receive autonomy,” the bottleneck is eventually removed. For this to happen, there must be a lot of transparency and information sharing, so that teams and individuals are enabled to make decisions independently. Clarity is harder with a distributed team. Distributed systems send heartbeats very frequently and detailed reports at a lower frequency. Communication is the key. Distributed standups are a better way of determining progress. Beyond that, move one-to-one conversations and decision-making to a common channel. We tried a concept called the end-of-day update: everyone posts their progress at the end of their day (accounting for different time zones). We believe it gives a better view of what each person is working on and of overall progress, even before they come to standups. At EverestEngineering, the coaches are responsible for improving the health of the channel. A healthy distributed team has a lot of discussions on Slack channels and quick calls. You can see many decisions made in the channel, with enough reactions and threads for a question.
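The heartbeat-versus-detailed-report pattern the team is borrowing can be shown in a few lines; in this sketch (intervals and names invented), the frequent lightweight signal plays the role of standups and the rarer detailed one plays the end-of-day update:

```python
import threading
import time

def heartbeat(member, interval=2):
    """Frequent, lightweight signal - the standup analogue."""
    while True:
        print(f"[heartbeat] {member}: still making progress")
        time.sleep(interval)

def end_of_day_update(member, interval=10):
    """Less frequent, detailed report - the end-of-day-update analogue."""
    while True:
        time.sleep(interval)
        print(f"[report] {member}: full summary of today's work")

for fn in (heartbeat, end_of_day_update):
    threading.Thread(target=fn, args=("dev-1",), daemon=True).start()
time.sleep(21)  # observe roughly ten heartbeats and two detailed reports
```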


How to build a quantum workforce

The growth means companies are looking to hire for quantum computing jobs, and that the country needs to build a quantum workforce. Efforts are underway; earlier this month, more than 5,000 students around the world applied to IBM's Qiskit Global Summer School for future quantum software developers. And the National Science Foundation and White House Office of Science and Technology Policy held a workshop in March designed to identify essential concepts to help students engage with quantum information science (QIS). But industry experts speaking on the topic during an IBM virtual roundtable Wednesday said K-12 students are not being prepared to enter programs with the requisite curriculum to work in this industry. Academia and industry must work in tandem to engage the broadest number of students and prepare them for the kinds of jobs that will be needed in the future, said Jeffrey Hammond, vice president and principal analyst at Forrester Research, who moderated the discussion. It was only four years ago that quantum computing became available in the cloud, giving more people access, noted panelist Abe Asfaw, global lead of quantum education at IBM Quantum.
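For readers wondering what the first steps in quantum software development look like, here is the kind of beginner exercise such programs teach: preparing an entangled Bell state. This sketch uses the Qiskit API as of its 2020 releases:

```python
from qiskit import QuantumCircuit, Aer, execute

# Entangle two qubits: Hadamard on qubit 0, then CNOT from 0 to 1.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Run on the local simulator; results cluster on '00' and '11',
# the signature of entanglement.
backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1000).result().get_counts()
print(counts)  # e.g. {'00': 507, '11': 493}
```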


A Developer-Centric Approach to Modern Edge Data Management

A substantial majority of embedded developers in the IoT and complex instrumentation space use C, C++, or C# to handle data processing and local analytics. That’s in part because of how easy it is to handle direct I/O for devices and internal system components, as well as more complex digitally enhanced machinery, through some variation of inp() and outp() statements. It’s also easy to manipulate collected data using familiar file system statements such as fopen(), fclose(), fread(), and fwrite(). This is the path of least resistance. Almost anyone who takes a programming class (or just takes the time to learn how) can use these statements to interact with data at the file system level. The problem is that file systems are very simple. They don’t do much by themselves. When it comes down to document and record management, indexing, sorting, creating and managing tables, and so on, there’s only one operative statement: DoItYourself(). And we’re not even talking about rare or rocket-science-level activities here. These are everyday activities that you’d find in any database system. Wait! It’s the D-word! May as well be the increment of the ASCII character pointer by two to the … you know what word.
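The article's examples are C-flavored, but the contrast is the same in any language. In this Python sketch (file name and schema invented), the hand-rolled scan over a flat file is the DoItYourself() the author is mocking, while an embedded database gives tables, indexing and querying for free:

```python
import sqlite3

# Do-it-yourself: a flat file means hand-rolled parsing and searching.
with open("readings.csv", "w") as f:
    f.write("sensor1,22.5\nsensor2,19.1\nsensor1,23.0\n")
with open("readings.csv") as f:
    sensor1 = [line.split(",") for line in f if line.startswith("sensor1,")]

# Embedded database: tables, indexes and queries come built in.
db = sqlite3.connect("readings.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (sensor TEXT, value REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?)",
               [("sensor1", 22.5), ("sensor2", 19.1), ("sensor1", 23.0)])
db.execute("CREATE INDEX IF NOT EXISTS idx_sensor ON readings (sensor)")
print(db.execute("SELECT value FROM readings WHERE sensor = ?",
                 ("sensor1",)).fetchall())
db.commit()
db.close()
```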



Quote for the day:

"If you have no confidence in self, you are twice defeated in the race of life." -- Marcus Garvey

Daily Tech Digest - July 30, 2020

The Challenges of Building a Reliable Real-Time Event-Driven Ecosystem

Building a dependable event-driven architecture is by no means an easy feat. There is an entire array of engineering challenges you will have to face and decisions you will have to make. Among them, protocol fragmentation and choosing the right subscription model (client-initiated or server-initiated) for your specific use case are some of the most pressing things you need to consider. While traditional REST APIs all use HTTP as the transport and protocol layer, the situation is much more complex when it comes to event-driven APIs. You can choose between multiple different protocols. Options include the simple webhook, the newer WebSub, popular open protocols such as WebSockets, MQTT or SSE, or even streaming protocols, such as Kafka. This diversity can be a double-edged sword—on one hand, you aren’t restricted to only one protocol; on the other hand, you need to select the best one for your use case, which adds an additional layer of engineering complexity. Besides choosing a protocol, you also have to think about subscription models: server-initiated (push-based) or client-initiated (pull-based). Note that some protocols can be used with both models, while some protocols only support one of the two subscription approaches. Of course, this brings even more engineering complexity to the table.
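To make the server-initiated (push) model concrete, here is a minimal subscriber sketch using the paho-mqtt Python client; the broker host and topic name are placeholders:

```python
import paho.mqtt.client as mqtt  # pip install paho-mqtt

# Server-initiated (push) model: the broker delivers events as they happen;
# the client never polls.
def on_connect(client, userdata, flags, rc):
    print("connected, result code", rc)
    client.subscribe("orders/created")  # placeholder topic

def on_message(client, userdata, msg):
    print(f"event on {msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, keepalive=60)  # placeholder host
client.loop_forever()  # block while the broker pushes messages to us
```

A webhook flips this around: the producer makes an HTTP POST to a URL you expose, so the "client" is itself a small server. That inversion is exactly the kind of engineering trade-off the protocol choice forces.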


Successful Digital Transformation Requires a Dual-track Approach

This first part of the dual-track approach focuses on the identification and implementation of new digital tech throughout an organization, while also working to change the cultures and business workflows affected by the transformation, according to the report. While this step is critical, it is also complex and time-consuming. The benefits may take time to come to fruition, which is why many executives are dissatisfied with current transformation results. Not only are executives impatient, but they don't have the second part of the dual track to carry them through, the report found. The second portion is a parallel track that homes in on areas overlooked in large-scale transformation tactics. These areas include the organization's ability to quickly connect and modernize hundreds of crucial processes that cross both business workflows and work groups, according to the report. This goal can be achieved through rapid-cycle innovation, which encourages business professionals outside of IT to propose and create new apps for updating existing workflow processes, with the goal of achieving quick wins for the company and supporting long-term transformation, the report found.


How deploying new-age technologies has changed the role of leadership amid COVID-19

Circumstances created by a pandemic such as COVID-19 have been hugely disruptive and could even render organizations paralytic if they are far removed from any understanding that technology is an imperative, not an optional add-on. This is why it is critical to have a proactive mindset toward technology, instead of a reactive approach. Proactive investment in technology is helping organizations reap maximum benefits, as this approach allows leaders to prepare their people to embrace and become comfortable using technology, so that it becomes spontaneously embedded in an organization at a fundamental level. The investments we proactively made many years ago, whether in secure virtual platforms or AI-driven due diligence processes that help automate how we finalize our contracts, have helped us adapt seamlessly and keep working with minimum disruption. The biggest asset has been the spontaneous comfort of our people in adapting to this transformed scenario of working from home, owing to their high degree of familiarity with using technology platforms and processes at work over the past many years, ensuring our ability to optimize productivity.


Anatomy of a Breach: Criminal Data Brokers Hit Dave

At the moment, however, some evidence points to ShinyHunters having phished Dave employees. The group has previously advertised - and has been suspected of being behind - the sale of millions of stolen records obtained from Indonesian e-commerce firm Tokopedia, Indian online learning platform Unacademy, Chicago-based meal delivery outfit HomeChef, online printing and photo store ChatBooks, university news site Chronicle.com, as well as Microsoft's private GitHub repositories, according to Baltimore-based security firm ZeroFox. How does ShinyHunters steal so much data? Cyble says that in a post to a hacking forum, a user called "Sheep" wrote of the Dave breach: "This database was dumped through sending GitHub phishing emails to Dave.com employees. The employees were found by searching for developers in the organization on LinkedIn/Crunchbase/Angel. All of the databases sold by ShinyHunters were obtained through this method. In some cases, [the] same method was used but for GitLab, Slack and Bitbucket."


IoT Security: How to Search for Vulnerable Connected Devices

Researchers offer many tools and ways to search for hacker-friendly IoT devices. The most effective methods have already been tested by botnet creators. In general, the use of certain vulnerabilities by botnets is the most reliable criterion for assessing the security level of IoT devices and the possibility of their mass exploitation. When searching for vulnerabilities, some attackers start from the firmware (in particular, from errors discovered by reverse-engineering it). Others start by searching for the manufacturer's name. In any case, a successful search needs some kind of distinctive feature of a vulnerable device, and it helps to find several such features. ... There really are many vulnerabilities in IoT devices, but not all of them are easy to exploit. Some vulnerabilities require a physical connection, or being nearby or on the same local network. The use of others is complicated by quick security patches. On the other hand, manufacturers are in no hurry to patch firmware and often admit it. Getting an accurate list of vulnerable IoT devices requires significant effort; it is not just a one-time query.
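In practice, a "distinctive feature" becomes a search query against an internet-wide index. Here is a sketch using the Shodan Python library; the query string is illustrative rather than a real vulnerability fingerprint, and the API key is a placeholder:

```python
import shodan  # pip install shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # assumes you have a Shodan account
api = shodan.Shodan(API_KEY)

# A distinctive banner acts as the device fingerprint; real fingerprints
# come from firmware analysis or vendor research, as described above.
query = 'product:"GoAhead-Webs" port:80'
try:
    results = api.search(query)
    print(f"{results['total']} matching devices indexed")
    for match in results["matches"][:5]:
        print(match["ip_str"], "-", match["data"][:60].strip())
except shodan.APIError as e:
    print("search failed:", e)
```

As the author notes, a hit count is not a list of exploitable devices; confirming exploitability takes far more effort than the query itself.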


Security: This nasty surprise could be waiting for retailers when they open up again

"A lot of retailers, when they come back online, they're going to be focused on business processes and getting employees back to work. They're not necessarily thinking, 'maybe I need to update Windows on my computer terminal', or update POS terminal firmware." In retail, where surges in online transactions during the pandemic have forced retailers to quickly transform their ecommerce capabilities, hackers have shifted their focus to make the most of this opportunity. This includes changing-up well-known types of attacks by using them in different ways, such as exploiting credit cards within a different type of merchant platform, and targeting parts of retailers' systems that might otherwise slip through the cracks. We've already seen new forms of attacks on retailers take place during the pandemic. In late June, researchers at security software firm Malwarebytes identified a new web-skimming attack , whereby cybercriminals concealed malware on ecommerce sites that would steal information typed into the payment input fields, including customers' names, address and card details.


Finland government funds work on potential quantum leap

The Finnish government has allocated €20.7m to the venture, which will be run as an innovation partnership open to international bidding. Closer to home, VTT-TRCF plans to cooperate with Finnish companies across the IT and industrial sphere during the various phases of the project's implementation and application. The rapid advances in quantum technology and computing have the potential to give societies the tools to overcome major future problems and challenges, such as the Covid-19 pandemic, that remain out of the reach of contemporary supercomputers. Quantum technologies have the potential to complete complex calculations that currently take days orders of magnitude more quickly. If they can practically perform calculations that traditional computers are fundamentally unable to do, they would mark a leap forward in computing capability far greater than that from the abacus to a modern computer. Antti Vasara, the CEO of VTT-TRCF, said: "The quantum computers of the future will be able to accurately model viruses and pharmaceuticals, or design new materials in a way that is impossible with traditional methods."


What the CCPA means for content security

Simply installing an ECM system will not yield a secure content ecosystem. If there is one thing that all ECM experts agree on, it's that installing an ECM system will, by itself, accomplish nothing aside from consuming resources. People need to use the system to manage content -- and want to use it -- even after setting up the necessary security controls to meet the requirements of the CCPA. Deploying an ECM system that is so secure that people do not want to use it is a waste of resources. The ECM system does not need to be complicated. Setting up secure desktop sync of content is an important first step toward ease of use and adoption. Instead of just rolling it out, companies need to work with each group using the software first. The business must help users organize their content and set up a basic structure for storing it so that the system doesn't become disorganized. Depending on the system a business is using, a basic structure may include a basic taxonomy, content types, standard metadata or a combination of any of these. If a business implements its ECM system correctly, its largest challenge will be securing mobile devices and laptops.


How blockchain could play a relevant role in putting Covid-19 behind us

Covid-19 has revealed the weaknesses of global supply chains, with countless reports of PPE issues, a lack of food in impoverished areas, and a breakdown of business as usual, even in places where demand has remained constant. Trust has always been the keystone of trade. But how can you trust supply chain partners to deliver in times of widespread failure? Owing to their decentralised nature, blockchain-based applications create a transparent ecosystem in which you can trust — and see — that the mechanisms in place are fair to all. They can provide instant overviews of entire supply chains to highlight issues as soon as they arise. What’s more, it is possible to implement live failsafes with smart contracts that can ensure the smooth continuation of the supply chain and remove the very need for trust in the first place. To this end, the World Economic Forum developed the Blockchain Deployment Toolkit, a set of high-level guidelines to help companies implement best practices across blockchain projects – especially those helping solve supply chain issues. The forum worked with more than 100 organisations for more than a year, delving into 40 different blockchain use cases, including traceability and automation, to help guide organisations in their efforts to solve real-world problems with blockchain.
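The transparency property described here follows from each record committing to the hash of its predecessor. This toy hash chain (no consensus layer, not a production blockchain) shows why a tampered supply-chain record is immediately visible to every participant:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Each supply-chain event commits to the hash of the previous event.
chain, prev = [], "0" * 64
for event in ["goods packed", "goods shipped", "goods received"]:
    block = {"event": event, "prev": prev}
    chain.append(block)
    prev = block_hash(block)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False  # a rewritten record breaks every later commitment
        prev = block_hash(block)
    return True

print(verify(chain))                  # True
chain[1]["event"] = "goods diverted"  # tamper with one record
print(verify(chain))                  # False: tampering is instantly visible
```

Smart contracts add the "live failsafe" on top: code attached to the same ledger that can, say, trigger a reroute automatically when an expected shipment record fails to appear.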


The growing trend of digitization in commercial banking

“Technology has absolutely been at the forefront of all the changes we have seen and will see in upcoming years,” explained Rao. Even so, the business of banking has not changed on a fundamental level. Rather, products have become more commoditized; similar business products are being offered, but customers are using them in different ways. In Rao’s words, “the ‘what’ component has not changed, but the ‘how’ has.” This is where digitization has had the biggest impact. For example, commercial banking capabilities like making a payment or collecting a receivable have long been available for corporate entities. But today, the same capability can be offered in a way that emphasizes a great user experience—something that hasn’t always been a focal area in the commercial banking space. ... Large traditional banks are frequently riddled with outdated legacy systems on the back end of operations, which dilutes their offerings even with modern digital technology at the front end. These legacy systems make it costly to create the ideal customer experience, leading many banks to focus on implementing strategies that pave the path towards modernization. In certain cases, this means opening up and modernizing selective pieces of back-end systems to improve operations overall.



Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." --