Daily Tech Digest - February 19, 2021

Data lake storage: Cloud vs on-premise data lakes

The data lake is conceived of as the first place an organisation’s data flows to. It is the repository for all data collected from the organisation’s operations, where it will reside in a more or less raw format. Perhaps there will be some metadata tagging to facilitate searches of data elements, but it is intended that access to data in the data lake will be by specialists such as data scientists and those who develop touchpoints downstream of the lake. Downstream is appropriate because the data lake is seen, like a real lake, as something into which all data sources flow, and those sources are potentially many, varied and unprocessed. From the lake, data goes downstream to the data warehouse, which is taken to imply something more processed, packaged and ready for consumption. While the data lake contains multiple stores of data in formats not easily accessible or readable by the vast majority of employees – unstructured, semi-structured and structured – the data warehouse is made up of structured data in databases to which applications and employees are afforded access. A data mart or hub may allow for data that is even more easily consumed by departments. So, a data lake holds large quantities of data in its original form. Unlike queries to the data warehouse or mart, interrogating the data lake requires a schema-on-read approach.
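As a minimal sketch of schema-on-read, assuming PySpark is available: the schema is supplied when the raw files are queried, not when they land in the lake. The bucket path, field names and the temporary view name are hypothetical.

```python
# Minimal schema-on-read sketch with PySpark (assumed available).
# The path and field names are hypothetical; the point is that the
# schema is applied at query time, not when the data is written.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("lake-read").getOrCreate()

# Raw JSON events landed in the lake exactly as they arrived.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("occurred_at", TimestampType()),
    StructField("amount", DoubleType()),
])

# The schema is applied on read; the files themselves stay untouched.
events = (
    spark.read
    .schema(event_schema)
    .json("s3://example-lake/raw/events/")   # hypothetical location
)
events.createOrReplaceTempView("raw_events")
spark.sql("SELECT event_id, amount FROM raw_events WHERE amount > 100").show()
```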


Microsoft Azure Front Door Gets a Security Upgrade

Johnson uses three principles to describe zero trust, the first of which involves adopting explicit verification for every transaction during a session: "So not just verifying the human, but the device, the data, the location, if it's an IoT device, the application – everything that happens in the session should be verified and anomalous behavior should be flagged," she explains. The second principle is ensuring least privilege access. Many organizations still provide too much privileged access to employees, Johnson says. One of the steps Microsoft is taking with its content and application delivery is implementing more controls around access. The third principle: "Then, finally, assume you've been breached," she says. Assumed breach is a topic the security industry has discussed for years, but with zero trust, they have to assume they have been breached, and that anything within the organization could potentially be breached. These principles have grown essential as application-delivery networks undergo a massive transformation to the cloud, Johnson explains. The new capabilities in Azure Front Door aim to provide organizations with one platform that meets availability, scalability, and security needs.


Tools And Models Used In Software Testing

Software testing is a significant part of software quality assurance (SQA): it is an activity used for evaluating and improving software quality. It involves a set of activities carried out with the sole aim of finding errors in software. It validates and verifies that the software or product is functioning correctly, without errors or bugs capable of introducing defects. In the testing phase, the errors from previous cycles must be detected; this ensures software reliability and quality assurance. As software functionality grows, it is essential to use innovative testing models and tools so that the time and cost spent on testing are minimized. When it comes to testing the functionality of the software, there are two types: manual and automated. Manual testing is carried out by the tester. Informal review, inspection, walkthrough, and technical review are the techniques of manual testing. Manual testing is time-consuming and requires more effort, which is the major issue with this kind of testing. Test automation helps to resolve and control these issues. Automated testing can be categorized into four types: performance testing, safety testing, accuracy testing and reliability testing. Using automation tools, the steps involved in manual testing are automated.
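As a small, hypothetical illustration of automating a manual check with pytest: the `apply_discount` function stands in for real application code, and the same assertions run on every build instead of being repeated by hand.

```python
# Hypothetical illustration of automating a manual check with pytest:
# a pricing function is exercised with normal and edge-case inputs,
# so the checks run on every build instead of by hand.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Toy function under test (stands in for real application code)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_no_discount_leaves_price_unchanged():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```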


Combining Three Pillars Of Cybersecurity

As cybersecurity gaps abound, there has been a growing panic in both industry and government over how to protect the cyber landscape. In the past, three significant risk management themes have been put forward to help ameliorate risk across the digital ecosystem: security by design, defense in depth, and zero trust. They are a triad, or three strong pillars of risk management needed for a successful cybersecurity strategy. Security by Design is really the initiation point of a risk management process—especially if you are a software or hardware developer concerned with security. In an article in United States Cybersecurity magazine, cybersecurity expert Jeff Spivey provided an excellent working definition: “Security by Design ensures that security risk governance and management are monitored, managed and maintained on a continuous basis. The value of this ‘holistic’ approach is that it ensures that new security risks are prioritized, ordered and addressed in a continual manner with continuous feedback and learning.” Defense in Depth. A variety of strong definitions exist for defense in depth in the security community.


The Future of Team Leadership Is Multimodal

Effective leadership in this new hybrid world requires different skills that go beyond traditional team leadership. Specifically, organizations will need leaders who can operate well across two distinct modes. For much of the time, they will operate in virtual coordination mode. This means establishing goals, monitoring progress, driving information sharing, and sustaining connections among colleagues working remotely. When their teams periodically come together to engage in true collaboration, leaders will need to operate in face-to-face collaboration mode, fostering deep learning, innovation, acculturation, and dedication. The nature and mix of team tasks will dictate the modes in which those teams operate. Tasks that involve working interdependently but without much integration — reporting, performing administrative tasks, making simple decisions, sharing information, drafting documents, and performing financial analyses — will mostly be done virtually. Likewise, our research and experience have shown that most one-on-one interactions between leaders and their reports, including some coaching, can be accomplished effectively through virtual means. However, essential tasks that require team members to integrate their knowledge, create safe spaces for dialogue on difficult issues, and form emotional connections cannot be done productively while working virtually.


Unstructured data: the hidden threat in digital business

With the growth of unstructured data comes the unfortunate truth that it’s much more difficult to control and secure than structured data. For example, if an employee is taking information in the form of unstructured data and moving it elsewhere, they may store the original document or picture on a local file share or send it in an email as an attachment. Within one organization, the process for handling documents could vary across employees and teams, and it’s very likely that management has no idea this is happening. Unstructured data doesn’t have to be a forever risk, though. It’s entirely possible for organizations to manage and incorporate it into safe data practices and protocols. For that to happen successfully, business leaders must do the following: First, acknowledge that unsecured unstructured data is a problem within the organization. Add it as an urgent priority for the IT or data security teams to address. Don’t wait until an issue arises or assume that hackers are going to go after larger volumes of what one assumes is more “attractive” data. We’ve learned that hackers are unpredictable and that no organization, no matter the size or scope, is immune to the threat.


How You Can Expedite Your Venture With Machine Learning

With machine learning tools, organizations can figure out gainful opportunities as well as possible risks more promptly. ML aids companies in improving business scalability and enhancing business operations. The rapidly evolving new techniques in the ML field are expanding the usage of machine learning to nearly infinite possibilities. The article focuses on how you can expedite your business growth with the use of machine learning, and here are the key points: Prediction of the market segment: When businesses are entering the market with a new idea, it is very important to understand and forecast the reactions of the market. If you go with human intelligence for a logical prediction, it would be a huge task to consider all the applicable parameters from a large set of historical data. However, if you make use of the correct classification algorithm(s), you can predict whether the response from the prospective market segment will be good, bad, or neutral. Besides, you can use continuous (regression) algorithms to predict the size or range. Prediction of customer lifetime value: For marketers, it is quite important to know about the customer lifetime value prediction and customer segmentation. For this, companies use huge amounts of data effectively with the help of ML and data mining to obtain meaningful business insights.
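A hedged sketch of that idea using scikit-learn, with synthetic data standing in for real market history: a classifier predicts the good/bad/neutral reaction and a regressor estimates segment size. Features, labels and figures are invented for illustration.

```python
# Hedged sketch: a classifier predicts whether a market segment's response
# will be good, bad, or neutral, and a regressor estimates its likely size.
# The features and synthetic data are purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # e.g. demographics, spend, engagement
response = rng.integers(0, 3, size=500)       # 0 = bad, 1 = neutral, 2 = good
segment_size = rng.uniform(1_000, 50_000, 500)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, response, segment_size, random_state=0
)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
reg = RandomForestRegressor(random_state=0).fit(X_tr, s_tr)

print("predicted reaction:", clf.predict(X_te[:1]))       # good / bad / neutral
print("predicted segment size:", reg.predict(X_te[:1]))   # continuous estimate
```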


Manufacturing outlook for 2021 focuses on resilience

The prime driver for the acceleration is the push to implement e-commerce platforms, either for B2B or direct-to-consumer commerce, Yavar said. "Manufacturers are all chasing the KPI thresholds around quality and on-time delivery that Amazon set, so everybody's trying to get as close as possible to that two-day or one-day service," he said. "That's not easily done, so they're scrambling to understand how deploying technology like robotics can speed up the process and strategically align distribution functions, whether it's in-house or external, to cut costs." The increasing importance of the supply chain as a vital business process will spur innovation and bring new players into the market, Yavar explained. "It's akin to the ERP market of the 1990s and early 2000s where there was the traditional 'Big 5,' but then we saw the explosion of players with the advent of cloud. The same thing's happening in the supply chain technology space today," he said. "The barrier to entry to produce the technology and get in the marketplace is much lower than it used to be, so this market will become more and more dynamic over time, there will be consolidation, and new technology and the supply chain will be seen not as a cost center but a differentiator for manufacturers over the next several years."


CIOs Face Decisions on Remote Work for Post-Pandemic Future

The evolution of the global remote work force had its share of growing pains, says Cortney Thompson, CIO with cloud solutions and managed services provider Lunavi. Early on, opportunistic vendors made quick pushes to offer services to companies in dire need of going remote, but he says some stumbled along the way. “A few of those vendors had scaling problems as they brought additional load on,” Thompson says. That made it important to listen to the experiences companies were having with those vendors, he says, and how their performance changed in response. Naturally, if organizations did not see the results they wanted, they looked to branch out to other providers in the market, Thompson says. While some vendors took a conservative approach in taking on clients at the onset of the pandemic, he says others focused on grabbing as much of the market as possible without such restraint. In some instances, things broke under pressure, Thompson says. “There were some supply chain issues along the way and there was stress on the system and cracks started to show.” Innovations that found their footing during the pandemic include the zero-trust approach to security, he says, with higher adoption rates.


Data Security Accountability in an Age of Regular Breaches

When it comes to information security, cyber hygiene is remarkably analogous to biological hygiene. Much as poor biological hygiene lets an infection take hold in an organism, poor digital security hygiene can result in an infection (security incident) progressing into a full-blown compromise (data breach). The expectation is that the breached organization will take active measures to mitigate the effects of the data breach, and it ends there. However, this is not enough. Much like taking precautions against spreading the COVID-19 infection, individuals must play their part in reducing their own levels of digital security contagion. Following any discovered infection resulting from a breach (digital or biological), the best process is to engage in measures to quarantine yourself to reduce the exposure of others. One of the most basic digital hygiene methods simply relies upon the user deploying complex and unique passwords for each service they utilize. While this would be the first port of call when a data breach is discovered, the fact is such a practice is rarely followed, and further explains many of the breaches we've experienced to date. To address this, the general public's attitude toward passwords needs to evolve to that of phone numbers.



Quote for the day:

"Leadership offers an opportunity to make a difference in someone's life, no matter what the project." -- Bill Owens

Daily Tech Digest - February 18, 2021

AI progress depends on us using less data, not more

The data science community’s tendency to aim for data-“insatiable” and compute-draining state-of-the-art models in certain domains (e.g. the NLP domain and its dominant large-scale language models) should serve as a warning sign. OpenAI analyses suggest that the data science community is more efficient at achieving goals that have already been obtained but demonstrate that it requires more compute, by a few orders of magnitude, to reach new dramatic AI achievements. MIT researchers estimated that “three years of algorithmic improvement is equivalent to a 10 times increase in computing power.” Furthermore, creating an adequate AI model that will withstand concept drifts over time and overcome “underspecification” usually requires multiple rounds of training and tuning, which means even more compute resources. If pushing the AI envelope means consuming even more specialized resources at greater costs, then, yes, the leading tech giants will keep paying the price to stay in the lead, but most academic institutions would find it difficult to take part in this “high risk – high reward” competition. These institutions will most likely either embrace resource-efficient technologies or pursue adjacent fields of research.


How to Create a Bulletproof IoT Network to Shield Your Connected Devices

By far, the biggest threat that homeowners face concerning all of their connected devices is the chance that an outsider might gain access to them and use them for nefarious purposes. The recent past is littered with examples of such devices becoming part of sophisticated botnets that end up taking part in massive denial of service attacks. But although you wouldn’t want any of your devices used for such a purpose, the truth is that if it happened, it likely wouldn’t affect you at all (not that I’m advocating that anyone ignore the threat). The average person really should be worried about the chance that a hacker might use the access they gain to a connected device as a jumping-off point to a larger breach of the network. That exact scenario has already played out inside multiple corporate networks, and the same is possible for in-home networks as well. And if it happens, a hacker might gain access to the data stored on every PC, laptop, tablet, and phone connected to the same network as the compromised device. And that’s what the following plan should help to prevent. In any network security strategy, the most important tool available is isolation. That is to say, the goal is to wall off access between the devices on your network so that a single compromised device can’t be used as a means of getting at anything else.


How to build a digital mindset to win at digital transformation

First, you need to overcome the technical skills barrier. For that you need the right people. Developing hardware is as different from developing software as selling a one-time product is from selling a service with recurring fees. Yes, you can train people to a certain extent to do so. But what we’ve realised at Halma is that diversity, equality and inclusion are just as important to digital & innovation success as every other aspect of business performance. At Halma this approach to diversity is in our DNA. Attracting and recruiting people with diverse viewpoints as well as diverse skills means that you will be able to see new opportunities and imagine new solutions. Second, you need to overcome the business model barrier. You need to think differently about how your business generates revenue. Fixed mindsets in your team that don’t have an outside-in approach to your market and are hooked on business as usual need to be changed. You need to take a bold and visionary approach to doing business differently, and help your team reimagine their old business model. Third, you need to overcome the business structure barrier. Often the biggest barrier to cultural adaptation is the organisation itself. Using the same tools and strategies that built your business today isn’t going to enable the digital transformation of tomorrow. It requires a fundamental shift in the way your organisation works.


Tips for boosting the “Sec” part of DevSecOps

“If there’s a thing that, as a security person, you’d call a ‘vulnerability,’ keep that word to yourself and instead speak the language of the developers: it’s a defect,” he pointed out. “Developers are already incentivized to manage defects in code. Allow those existing prioritization and incentivization tools to do their job and you’ll gain the security-positive outcomes that you’re looking for.” ... “Organizations need to stop treating security as some kind of special thing. We used to talk about how security was a non-functional requirement. Turns out that this was a wrong assumption, because security is very much a function of modern software. This means it needs to be included as you would any other requirement and let the normal methods of development defect management take over and do what they already do,” he noted. “There will be some uplift requirements to ensure your development staff understands how to write tests that validate security posture (i.e., a set of tests that exercise your user input validation module), but this is generally not a significant problem as long as you’ve built in the time to do this kind of work by including the security requirements in that set of epics and stories that fit within the team’s sprint budget.”
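As an illustration of folding security checks into ordinary defect management, here is a hypothetical pytest sketch; `sanitize_username` stands in for whatever input validation module a team actually ships, and a failing case lands in the normal defect backlog like any other bug.

```python
# A minimal sketch of treating security checks as ordinary test defects:
# pytest cases exercise a hypothetical input-validation helper the same
# way functional tests would.
import pytest

def sanitize_username(value: str) -> str:
    """Stand-in validation module: allow only short alphanumeric names."""
    if not value.isalnum() or len(value) > 32:
        raise ValueError("invalid username")
    return value.lower()

@pytest.mark.parametrize("payload", [
    "alice' OR '1'='1",          # SQL-injection style input
    "<script>alert(1)</script>",  # script injection attempt
    "a" * 200,                    # oversized input
])
def test_malicious_input_is_rejected(payload):
    with pytest.raises(ValueError):
        sanitize_username(payload)

def test_well_formed_input_passes():
    assert sanitize_username("Alice01") == "alice01"
```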


6 strategies to reduce cybersecurity alert fatigue in your SOC

Machine Learning is at the heart of what makes Azure Sentinel a game-changer in the SOC, especially in terms of alert fatigue reduction. With Azure Sentinel we are focusing on three machine learning pillars: Fusion, Built-in Machine Learning, and “Bring your own machine learning.” Our Fusion technology uses state-of-the-art scalable learning algorithms to correlate millions of lower fidelity anomalous activities into tens of high fidelity incidents. With Fusion, Azure Sentinel can automatically detect multistage attacks by identifying combinations of anomalous behaviors and suspicious activities that are observed at various stages of the kill-chain. On the basis of these discoveries, Azure Sentinel generates incidents that would otherwise be difficult to catch. Secondly, with built-in machine learning, we pair years of experience securing Microsoft and other large enterprises with advanced capabilities around techniques such as transferred learning to bring machine learning to the reach of our customers, allowing them to quickly identify threats that would be difficult to find using traditional methods. Thirdly, for organizations with in-house capabilities to build machine learning models, we allow them to bring those into Azure Sentinel to achieve the same end-goal of alert noise reduction in the SOC.
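The Fusion engine itself is proprietary, but the underlying idea of correlating many low-fidelity signals into a handful of incidents can be sketched in a few lines of Python. The signals, entities, window and threshold below are invented purely for illustration and are not Azure Sentinel's algorithm.

```python
# Illustrative only -- not Azure Sentinel's Fusion algorithm. Many weak
# signals are grouped per entity, and a single incident is raised only
# when several kill-chain stages co-occur within a time window.
from collections import defaultdict
from datetime import datetime, timedelta

signals = [  # (timestamp, entity, kill_chain_stage) -- made-up data
    (datetime(2021, 2, 16, 9, 0), "vm-42", "initial_access"),
    (datetime(2021, 2, 16, 9, 20), "vm-42", "lateral_movement"),
    (datetime(2021, 2, 16, 9, 40), "vm-42", "exfiltration"),
    (datetime(2021, 2, 16, 11, 0), "vm-07", "initial_access"),
]

WINDOW = timedelta(hours=1)
MIN_STAGES = 3  # require several distinct stages before alerting

by_entity = defaultdict(list)
for ts, entity, stage in signals:
    by_entity[entity].append((ts, stage))

incidents = []
for entity, events in by_entity.items():
    events.sort()  # order by timestamp
    stages = {s for ts, s in events if ts - events[0][0] <= WINDOW}
    if len(stages) >= MIN_STAGES:
        incidents.append((entity, sorted(stages)))

print(incidents)  # only vm-42 becomes an incident; the lone vm-07 signal stays quiet
```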


How To Stand Out As A Data Scientist In 2021

Being a jack of all trades doesn’t cut it anymore. While data science has many applications, people will pay more if you are an expert at one thing. For instance, you will be worth your weight in gold as a data scientist if you are exceptional at data visualisation in a particular language rather than a bits-and-pieces player. The top technical skills in demand in 2021 are data wrangling, machine learning, data visualisation, analytics tools, etc. As a data scientist, it’s imperative to know your fundamentals cold. It would help if you spent enough time with your data to extract actionable insights. A data scientist should sharpen her skills by exploring, plotting and visualising data as much as possible. Most data scientists, or aspiring data scientists with a statistics background, learn to code or take up a few machine learning or statistics classes. However, it is one thing to code little models on practice platforms and another thing to build a robust machine learning project deployable in the real world. As a rule, data scientists need to learn the fundamentals of software engineering and real-world machine learning tools.


AI startup founders reveal their artificial intelligence trends for 2021

Matthew Hodgson, CEO and founder of Mosaic Smart Data, says AI and automation is “permeating virtually every corner of capital markets.” He believes that this technology will form the keystone of the future of business intelligence for banks and other financial institutions. The capabilities and potential of AI are enormous for our industry. According to Hodgson, recent studies have found that companies not using AI are likely to suffer in terms of revenue. “As the link between AI use and revenue growth continues to strengthen, there can be no doubt that AI will be a driving force for the capital markets in 2021 and in the decade ahead — those firms who are unwilling to embrace it are unlikely to survive,” he continues. Hodgson predicts that with the continued tightening regulatory environment, financial institutions will have to do more with less and many will need to act fast to remain both competitive and relevant in this ‘new normal’. “As a result, we are seeing that financial institutions are increasingly looking to purchase out-of-the-box third-party solutions that can be onboarded within a few short months and that deliver immediate results rather than taking years to build their own systems with the associated risks and vast hidden costs,” he adds.


How Reading Papers Helps You Be a More Effective Data Scientist

In the first pass, I scan the abstract to understand if the paper has what I need. If it does, I skim through the headings to identify the problem statement, methods, and results. In this example, I’m specifically looking for formulas for calculating the various metrics. I give all papers on my list a first pass (and resist starting on a second pass until I’ve completed the list). In this example, about half of the papers made it to the second pass. In the second pass, I go over each paper again and highlight the relevant sections. This helps me quickly spot important portions when I refer to the paper later. Then, I take notes for each paper. In this example, the notes were mostly around metrics (i.e., methods, formulas). If it was a literature review for an application (e.g., recsys, product classification, fraud detection), the notes would focus on the methods, system design, and results. ... In the third pass, I synthesize the common concepts across papers into their own notes. Various papers have their own methods to measure novelty, diversity, serendipity, etc. I consolidate them into a single note and compare their pros and cons. While doing this, I often find gaps in my notes and knowledge and have to revisit the original paper.


Generation Z Is Bringing Dramatic Transformation to the Workforce

While Gen Zers and Millennials are coming into their own in the workforce, Baby Boomers are leaving in droves, taking valuable expertise and experience with them that’s often not documented throughout the organization. Pew Research reports 3.3 million people retired in the third quarter of 2020 -- likely driven by staff reductions and incentivized retirement packages created by the pandemic. The change in rank will inevitably drive how people interact with technology, particularly around the transfer of knowledge to bridge the skills gap. While this transition is still in flux, we’ve already been able to imagine the impact. Coding languages risk becoming extinct, and machinery risks grinding to a halt. Data from recruitment firm Robert Half reveals three quarters of finance directors believe the skills gap created by retiring Baby Boomers will negatively impact their business within 2-5 years. To that point, the COVID pandemic is not only creating turnover in the workforce but is also making in-person knowledge sharing difficult. Technology is helping to soften this challenge, ensuring business resiliency against the “disruption” of retirement. Where practical knowledge handovers are less viable, in the case of remote work or global organizations, programming languages or process-specific knowledge can be taught through artificial intelligence (AI). 


The Theory and Motive Behind Active/Active Multi-Region Architectures

The concept of active/active architectures is not a new one and can in fact be traced back to the 70s when digital database systems were being newly introduced in the public sphere. Now as cloud vendors roll out new services, one of the factors they are abstracting away for users is the set-up of such a system. After all, one of the major promises of moving to the cloud is the abstraction of these types of complexities along with the promise of reliability. Today, an effective active/active multi-region architecture can be built on almost all cloud vendors out there. Considering the ability and maturity of cloud services in the market today, this article will not act as a tutorial on how to build the intended architecture. There are already various workshop guides and talks on the matter. In fact, one of the champions of resilient and highly available cloud architectures, Adrian Hornsby, who is the Principal Technical Evangelist at AWS, has a great series of blogs guiding the reader through active/active multi-region architectures on AWS. However, what is missing, or at least what has been lost, is the theory and clear understanding of the motive behind implementing such an architecture.



Quote for the day:

"Expression is saying what you wish to say, Impression is saying what others wish to listen." -- Krishna Sagar

Daily Tech Digest - February 16, 2021

Thought-detection: AI has infiltrated our last bastion of privacy

The research team plans to examine public acceptance and ethical concerns around the use of this technology. Such concerns would not be surprising and conjure up a very Orwellian idea of the ‘thought police’ from 1984. In this novel, the thought police watchers are expert at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never mastered learning exactly what a person was thinking. This is not the only thought technology example on the horizon with dystopian potential. In “Crocodile,” an episode of Netflix’s series Black Mirror, the show portrayed a memory-reading technique used to investigate accidents for insurance purposes. The “corroborator” device used a square node placed on a victim’s temple, then displayed their memories of an event on screen. The investigator says the memories: “may not be totally accurate, and they’re often emotional. But by collecting a range of recollections from yourself and any witnesses, we can help build a corroborative picture.” If this seems farfetched, consider that researchers at Kyoto University in Japan developed a method to “see” inside people’s minds using an fMRI scanner, which detects changes in blood flow in the brain.


How to protect backups from ransomware

Whatever backup solution you choose, copies of backups should be stored in a different location. This means more than simply putting your backup server in a virtual machine in the cloud. If the VM is just as accessible from an electronic perspective as it would be if it were in the data center, it’s just as easy to attack. You need to configure things in such a way that attacks on systems in your data center cannot propagate to your backup systems in the cloud. This can be done in a variety of ways, including firewall rules, changing operating systems and storage protocols. ... If your backup system is writing backups to disk, do your best to make sure they are not accessible via a standard file-system directory. For example, the worst possible place to put your backup data is E:\backups. Ransomware products specifically target directories with names like that and will encrypt your backups. This means that you need to figure out a way to store those backups on disk in such a way that the operating system doesn’t see those backups as files. For example, one of the most common backup configurations is a backup server writing its backup data to a target deduplication array that is mounted to the backup server via server message block (SMB) or network file system (NFS). 


CFOs are becoming catalysts of digital strategy

“The role of the CFO has further evolved beyond serving as the finance lead to becoming a ‘digital steward’ of their organization. Increasingly, CFOs are focused on collecting and interpreting data for key business decisions and enabling strategy beyond the borders of the finance function,” said Christian Campagna, Ph.D., senior managing director and global lead of the CFO & Enterprise Value practice at Accenture. “Faced with new challenges spurred by the pandemic, today’s CFOs must execute their organizations’ strategies at breakthrough speeds to create breakout value and success that can be realized across the enterprise.” The report identifies an elite group (17%) of CFOs who have transformed their roles effectively, resulting in positive changes to their organizations’ top-line growth and bottom-line profitability. ... increasingly, companies are looking to CFOs to spearhead thinking around future operating models and drive the technology agenda forward with a focus on security and ESG. In fact, 68% of surveyed CFOs say that finance takes ultimate responsibility for ESG performance within their enterprise. However, 34% specifically cited concern about data and privacy breaches as a barrier preventing them from realizing their full potential as a driver of strategic change.


Data meets science: Open access, code, datasets, and knowledge graphs for machine learning research and beyond

Reproducibility is a major principle of the scientific method. It means that a result obtained by an experiment or observational study should be achieved again with a high degree of agreement when the study is replicated with the same methodology by different researchers. According to a 2016 Nature survey, more than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. This so-called reproducibility or replication crisis has not left artificial intelligence intact either. Although the writing has been on the wall for a while, 2020 may have been a watershed moment. That was when Nature published a damning response written by 31 scientists to a study from Google Health that had appeared in the journal earlier. Critics argued that the Google team provided so little information about its code and how it was tested that the study amounted to nothing more than a promotion of proprietary tech. As opposed to sometimes obscure research, AI has the public's attention and is backed and capitalized by the likes of Google. Plus, AI's machine learning subdomain with its black box models makes the issue especially pertinent. Hence, this incident was widely reported on and brought reproducibility to the fore.


Diversity in security: How 3 organizations are making a difference—one relationship at a time

ICMCP and Women in CyberSecurity (WiCyS) announced that they will work with Target this spring to expand access to the National Cyber League (NCL) virtual competition and training program for 500 women and BIPOC individuals as a way to introduce cybersecurity and technology careers to more underrepresented students. The competition gives participants a chance to tackle simulated real-world scenarios as a way to sharpen their cybersecurity skills, explore areas of career specialization, and boost their resume. Target CISO Rich Agostino said the opportunity for his company to participate fit with its long-standing efforts to increase the diversity of its workforce and the technical professions, too. For example, Agostino has a formal mentoring program, pairing women on his team with outside executives. “I’m a huge believer that if you want to make a difference in someone’s career, you get them connected with the right people to build their network,” he says. Target, which is headquartered in Minneapolis, also works with the University of Minnesota through various programs, such as scholarships and networking opportunities, to help increase diversity among the students and, thus, the future workforce.


Filecoin Aims to Use Blockchain to Make Decentralized Storage Resilient and Hard to Censor

At the heart of Filecoin is the concept of provable storage. Simply put, to "prove" storage is to convince any listener that you have a unique replica of a certain piece of data stored somewhere. It is important that the data stored be uniquely replicated, for if not, anyone could claim to have stored a long string of zeros (or some other junk data). The completely naive proof of storage would be to simply furnish the entirety of the stored data to someone demanding to see the proof. This is infeasible when the size of the data grows large. The Filecoin protocol specifies a secure cryptographic approach to proving storage. Storage providers submit such proofs once a day, which are validated by every node on the Filecoin network. The upshot is that someone storing data with a Filecoin storage provider does not have to worry about the data being secretly lost or corrupted. If that happens, it will be automatically detected by the network within a day, and the storage provider will be penalized appropriately. The Filecoin marketplace provides a platform for storage clients and providers to meet and negotiate storage deals.
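To make the challenge-response idea concrete, here is a toy sketch in Python. It is not Filecoin's Proof-of-Replication or Proof-of-Spacetime; it only shows how a verifier can check possession of data without the provider shipping the whole file on every challenge.

```python
# A toy challenge-response sketch of the idea behind provable storage.
# NOT Filecoin's actual proof system; purely illustrative.
import hashlib
import os

def prove(data: bytes, nonce: bytes) -> str:
    """Storage provider: hash the stored data together with a fresh nonce."""
    return hashlib.sha256(nonce + data).hexdigest()

def verify(expected_data: bytes, nonce: bytes, proof: str) -> bool:
    """Verifier: recompute the hash from its own copy of the data."""
    return proof == hashlib.sha256(nonce + expected_data).hexdigest()

data = b"unique replica of the client's file"
nonce = os.urandom(32)             # unpredictable challenge issued each round
proof = prove(data, nonce)
print(verify(data, nonce, proof))  # True only if the provider still holds the data
```

In this naive form the verifier still needs its own copy of the data to check the hash, which is exactly why Filecoin relies on succinct cryptographic proofs that avoid that requirement; the sketch only shows the challenge-response shape.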


Improving understanding of machine learning for end-users

Firstly, machine learning processes need to be explainable. With the vast majority of models being trained by human employees, it’s vital that users know what information they need to provide for the goal of usage to be reached, so that alerts of any anomalies can be as accurate as possible. Samantha Humphries, senior security specialist at Exabeam, said: “In the words of Einstein: ‘If you can’t explain it simply, you don’t understand it well enough’. And it’s true – vendors are often good at explaining the benefits of machine learning tangibly – and there are many – but not the process behind it, and hence it’s often seen as a buzzword. “Machine learning can seem scary from the outset, because ‘how does it know?’ It knows because it’s been trained, and it’s been trained by humans. “Under the hood, it sounds like a complicated process. But for the most part, it’s really not. It starts with a human feeding the machine a set of specific information in order to train it. “The machine then groups information accordingly and anything outside of that grouping is flagged back to the human for review. That’s machine learning made easy.” Mark K. Smith, CEO of ContactEngine, added: “Those of us operating in an AI world need to explain ourselves – to make it clear that all of us already experience AI and its subset of machine learning every day.
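The "train, group, flag the rest" loop Humphries describes can be sketched with any grouping model; the example below uses scikit-learn's KMeans with invented data and an arbitrary distance threshold, purely as an illustration of the idea rather than any vendor's implementation.

```python
# A small sketch of the "train, group, flag the rest" loop described above.
# KMeans stands in for whatever grouping model a vendor actually uses;
# the data and threshold are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
normal = rng.normal(loc=[0, 0], scale=0.5, size=(200, 2))   # human-curated training data
model = KMeans(n_clusters=3, n_init=10, random_state=1).fit(normal)

new_events = np.vstack([rng.normal(scale=0.5, size=(5, 2)), [[6.0, 6.0]]])
distances = np.min(model.transform(new_events), axis=1)     # distance to nearest group

THRESHOLD = 2.0
for event, dist in zip(new_events, distances):
    if dist > THRESHOLD:
        print("flag for human review:", event)   # outside every learned grouping
```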


The Kris Gopalakrishnan innovation model

The areas that I chose were primarily in healthcare because the space can be transformed using technology. If India needs to provide affordable, quality, and accessible healthcare to 1.3 billion people, it has to be built on technology and a new model of the healthcare system. So from areas of the ageing brain, I also looked at the other aspects of healthcare, including preventive, curative, and palliative care. To that end, I invested in multiple companies. I set up my startup, Bridge Health Medical & Digital Solutions, and recently invested in a palliative care company, Sukino Healthcare Solutions. I have also invested in a health-tech startup called Niramai Health Analytix, besides my investments in Neurosynaptic Communications, and Cure.fit, among others. ... The perfect business is a predictable business: what you forecast, what you plan, you achieve. But it is never like that [in reality] because there are so many variables which are not under [your] control. The pandemic is an example, unfortunately, of what can go wrong. The idea of a business is to create a self-sustaining model. A startup should think about creating a profitable business. As you scale up, one option is to opt for Series C and D funding rounds and then exit by selling out to another company.


Graph-Based AI Enters the Enterprise Mainstream

Graph databases are a key pillar of this new order. They provide APIs, languages, and other tools that facilitate the modeling, querying, and writing of graph-based data relationships. And they have been coming into enterprise cloud architecture over the past two to three years, especially since AWS launched Neptune and Microsoft Azure launched Cosmos DB, respectively, each of which introduced graph-based data analytics to their cloud customer bases. Riding on the adoption of graph databases, graph neural networks (GNN) are an emerging approach that leverages statistical algorithms to process graph-shaped data sets. Nevertheless, GNNs are not entirely new, from an R&D standpoint. Research in this area has been ongoing since the early ‘90s, focused on fundamental data science applications in natural language processing and other fields with complex, recursive, branching data structures. GNNs are not to be confused with the computational graphs, sometimes known as “tensors,” of which ML/DL algorithms are composed. In a fascinating trend under which AI is helping to build AI, ML/DL tools such as neural architecture search and reinforcement learning are increasingly being used to optimize computational graphs for deployment on edge devices and other target platforms.
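For readers unfamiliar with how a GNN layer differs from an ordinary neural layer, here is a bare-bones NumPy sketch of GCN-style message passing; real GNN libraries add training, batching, and far more, and the graph and weights below are invented.

```python
# A bare-bones sketch of one graph neural network layer: each node updates
# its features by aggregating its neighbours' features through a weight
# matrix. Purely illustrative; weights here are random, not learned.
import numpy as np

A = np.array([[0, 1, 1, 0],        # adjacency matrix of a 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))   # 8 features per node
W = np.random.default_rng(1).normal(size=(8, 4))   # projection matrix

A_hat = A + np.eye(4)                               # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)  # one GCN-style layer

print(H.shape)  # (4, 4): a new 4-dimensional embedding for each node
```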


6 cloud vulnerabilities that can cripple your environment

Users are responsible for configurations, so your IT team needs to prioritize mastery of the various settings and options. Cloud resources are guarded by an array of configuration settings that detail which users can access applications and data. Configuration errors and oversights can expose data and allow for misuse or alteration of that data. Every cloud provider uses different configuration options and parameters. The onus is on users to learn and understand how the platforms that host their workloads apply these settings. IT teams can mitigate configuration mistakes in several ways. Adopt and enforce policies of least privilege or zero trust to block access to all cloud resources and services unless such access is required for specific business or application tasks. Employ cloud service policies to ensure resources are private by default. Create and use clear business policies and guidelines that outline the required configuration settings for cloud resources and services. Be a student of the cloud provider's configuration and security settings. Consider provider-specific courses and certifications. Use encryption as a default to guard data at rest and in flight where possible. 
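One way to act on this guidance is to script the checks. The sketch below is provider-agnostic and uses invented field names; a real audit would read configuration from the cloud provider's own APIs or from infrastructure-as-code definitions.

```python
# A provider-agnostic sketch of automating the guidance above: represent
# resource configurations as plain records and flag anything that is not
# private-by-default, unencrypted, or over-privileged. Fields are invented.
resources = [
    {"name": "reports-bucket", "public": True,  "encrypted": False, "roles": ["admin", "analyst"]},
    {"name": "billing-db",     "public": False, "encrypted": True,  "roles": ["billing-svc"]},
]

ALLOWED_ROLES = {"billing-svc", "analyst"}   # least-privilege allow-list

for r in resources:
    findings = []
    if r["public"]:
        findings.append("publicly accessible (should be private by default)")
    if not r["encrypted"]:
        findings.append("encryption at rest disabled")
    extra = set(r["roles"]) - ALLOWED_ROLES
    if extra:
        findings.append(f"over-privileged roles: {sorted(extra)}")
    for f in findings:
        print(f"{r['name']}: {f}")
```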



Quote for the day:

"Leadership is the creation of an environment in which others are able to self-actualize in the process of completing the job." -- John Mellecker

Daily Tech Digest - February 15, 2021

Pay-per-use pricing model can boost digital initiatives

One of the more dominant reasons why organizations do not take advantage of as-a-service and pay-per-use models is that their current budget model will not allow it. Another common critique, however, centers on the assumption that if you implement a pay-per-use model, you will pay more over time than if you procured the system upfront. In other words, usage-based pricing enables you to pay less upfront, but over the life of the product, those who choose to pay more upfront will pay less overall. Each offering is different. But even if you assume that a pay-per-use option would require a larger expense versus an upfront purchase, the ability to defer payments has value. In addition, pay-per-use models reduce the personnel, time and risk associated with forecasting the environment. The cost equation ultimately comes down to two questions: How confident are you in your ability to forecast application needs over the next three to four years? And are you sure there isn't something you could do that would be a better use of your time? The research data on the rising interest in a pay-per-use pricing model often focuses predominantly on consumption-based pricing, which is different from a true as-a-service model. Organizations can achieve similar benefits with each approach, however.
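A back-of-the-envelope sketch, with invented figures, shows how the trade-off plays out between deferred pay-per-use spend and a larger upfront purchase.

```python
# A back-of-the-envelope comparison of upfront purchase vs. pay-per-use,
# using invented numbers, to show where the break-even point falls.
upfront_cost = 400_000            # one-time purchase of the system
monthly_usage_fee = 12_000        # pay-per-use charge at forecast consumption
months = 48                       # planned life of the system

pay_per_use_total = monthly_usage_fee * months
breakeven_month = upfront_cost / monthly_usage_fee

print(f"pay-per-use total over {months} months: {pay_per_use_total:,}")
print(f"upfront purchase breaks even after ~{breakeven_month:.0f} months")
# With these figures, pay-per-use costs more over four years, but it defers
# roughly 400k of spend and removes the risk of over-forecasting capacity.
```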


Malware Exploits Security Teams' Greatest Weakness: Poor Relationships With Employees

Teams that are willing to brave this task manually will find a high mountain to climb. Approaching an employee about this forces all sorts of uncomfortable topics front and center. Inquisitive users may now be curious about the scope and veracity of the company's monitoring. Now that they are working from home and surrounded by family, they wonder where is the line drawn with collecting personal data, and is there an audit log for the surveillance? For many teams, the benefits of helping end users are not worth the risk of toppling over the already wobbly apple cart. So extensions like The Great Suspender linger, waiting for the right moment to siphon data, redirect users to malicious websites, or worse. This seems like a significant weakness in how IT and security teams operate. Because too few security teams have solid relationships built on trust with end users, malware authors can exploit this reticence, become entrenched, and do some real damage. Forward-looking teams are rolling up their sleeves and developing tools to solve this. Dan Jacobson, who leads corporate IT at Datadog, told me about a tool his team built to handle this type of conundrum.


Technology & Water: How 5G and IoT Can Update Our Water Infrastructure

Droughts lower reservoir levels, which creates water-quality challenges and means that every last drop of water in a system is critical. A company named Ketos has developed a suite of interconnected devices and tools that rely on sensors to get near real-time analytics on water quality and supply management. The sensors can flag toxins like lead and copper and report on water flow and pressure. And since every last drop of water is critical in a drought, there are now sensor-based leak detection technologies from two companies, Echologics and Syrinix, that allow utilities to monitor their networks in near real-time and be proactive with incident or maintenance responses. Las Vegas Valley used the Syrinix system to lower the average 20% to 40% of water lost to leaks down to just 5%. High-speed 5G networks like Verizon’s 5G Ultra Wideband could allow all this technology to work in tandem, permitting the processing and transferring of vast amounts of data from the water utility’s equipment to decision makers. In addition, more sensors mean more real-time information so that managers can be proactive instead of just reactive. On the other end of the spectrum, extreme rainfall events can cause flash flooding when rainwater fills up the sewer systems. 


Finding the Great Predictors for Machine Learning

Factor analysis is a statistical process for expressing variables in terms of latent variables called factors. Factors represent two or more variables that are highly correlated with each other. In short, factors are proxies for the model variables because of the common variance that exists when variables correlate with each other. The benefit of factor analysis is that it eliminates variables that are not influencing the model. Factors developed when reducing the dimensionality of a dataset present a more economical way to describe influential variables. The result is a reduced number of parameters for statistical models, be it a regression or a machine learning model. An analyst can plan a more optimal computation of training data, allowing a machine learning model to be developed more efficiently. Factor analysis is particularly useful for surveys that contain a broad variety of comments and categorical responses. Survey responses are typically categorized, such as on a Likert scale, in which respondents rate a question statement from 1 (very strongly agree) to 10 (very strongly disagree). But establishing which responses influence the answer being sought can be tricky.
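A minimal scikit-learn sketch shows the mechanics on synthetic survey-style data; the items and loadings are invented, and real survey work would add rotation and careful interpretation of the loadings.

```python
# A minimal factor-analysis sketch on synthetic survey-style data: ten
# Likert-scale items that secretly reflect two latent factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
latent = rng.normal(size=(300, 2))                  # two hidden factors
loadings = rng.normal(size=(2, 10))                 # how items load on the factors
responses = latent @ loadings + rng.normal(scale=0.5, size=(300, 10))

fa = FactorAnalysis(n_components=2, random_state=0).fit(responses)
print(np.round(fa.components_, 2))   # estimated loadings: which items move together
scores = fa.transform(responses)     # 2 factor scores per respondent instead of 10 items
print(scores.shape)                  # (300, 2)
```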


Facilitating remote working and improving talent engagement using AI

There are of course many significant benefits to remote working. Employees can work where they want, when they want, and employers have access to a global pool of talent. In order for both parties to achieve maximum productivity, businesses need to not only embrace new ways of working, but also facilitate and improve the experience for their teams and enable a better work life balance. And AI supports this shift. On the face of it, the idea of AI conjuring images of robots and opaque black boxes of algorithms seems incompatible with these challenges. However, there are viable opportunities for creative and impactful AI applications that help address these risks head on. The key to allowing this to happen is for companies to embrace cloud-based work management and collaboration systems. With such a system in place and under constant use, AI technologies can learn more about your team, the work they are doing and how they are interacting with their colleagues. This is what I call being AI-ready. Naturally, it takes an insightful party who can identify an issue or a pattern of issues that relate to remote working. This could be in regards to employee productivity, efficiency or satisfaction. 


What the COVID-19 vaccine rollout means for returning to the workplace

The Challenger survey found that mask requirements will be nearly universal; almost 93% of companies said they will provide and/or require workers to wear masks. And masks will not be limited to just workers: almost 72% of companies said they will provide and/or require visitors to wear masks, the company said. In terms of other policies, the same number of companies also said they will be limiting or prohibiting gatherings in shared spaces, such as conference rooms, break rooms, lunchrooms, and bathrooms. The same percentage also said they would be maintaining social distancing protocols, with fewer people in workspaces and not allowing workers to come within six feet of each other, the Challenger survey said. And the same percentage of survey respondents planned to provide sanitizing products. Only 14% of companies in the Challenger survey said they would be providing and/or requiring workers to wear gloves. Among other precautions planned, 89% of companies said they would conduct regular deep-cleaning of all workstations/worksites, and 82% would limit or exclude visitors. Elevator use will be limited for 57% of companies and the same number said they will take the temperature of workers when they arrive at work and will survey workers to see if they have had any risk of exposure.


Microsoft asks government to stay out of its cyber attack response in Australia

It's a concept that has been applied to cyber incident responses, where additional risk is introduced during the initial phases of an ongoing crisis because the ability of subject matter experts and network defenders to adequately respond is hampered by an onslaught of information requests, speculation, and well-intended ideas from individuals or organisations when the malicious activity is yet to be fully understood by anyone. It said further complicating any such operation is the fact that the government would be doing so without a thorough understanding of the specific resources and protocols available for deployment, and that the "resources required to obtain such knowledge would be prohibitively expensive, logistically complicated, and amount to an extremely invasive governmental intervention". "As such, the danger of having a government direct a private sector entity's response without complete knowledge of the situation and the technology cannot be understated," Microsoft said. "Moreover, individual organisations are not only best positioned to respond; they also have as equal an incentive as the government to protect their own networks and maintain the trust of their customers."


Why the ‘experience of things’ lies at the heart of digital transformation

Right now, customer behaviors are changing, so customer experience leaders have to recognize the shift and keep pace with the changing dynamics. Which means that brands have to stop trying to create static customer journey maps and realize that just like it’s the customer’s experience, it’s also the customer’s journey. And customers are on multiple journeys with multiple channels and want to engage with brands exactly how they want to engage. So, don’t try to force your journey maps on them. Remove complexity and friction in every interaction. And own it when you mess up or fail to do so. Be authentic! When it comes to employees, organizations have to transform digital workplace experiences for people with unified technology solutions that actually get down into the flow of how they work. Look at technology solutions from the perspective of what experiences they will enable or offer for people. Again, you have to remove complexity and friction. For example, communications and collaboration platforms have to enable a set of seamless and frictionless experiences so people can connect, co-create, collaborate, and build community. The pandemic seriously drove home the point of how critical real-time collaboration solutions like video and messaging are. But for employees, beyond the tech should be a focus on how they are experiencing work.


Boosting science and engineering in the cloud

Hard problems — like autonomous vehicles, rockets, and supersonic transport — benefit from engineers and scientists being able to flexibly mold infrastructure to the questions they’re hoping to answer. Boiled down, smart companies have learned that the best way to attract and nurture developer talent is not only to compensate them well, but also, and more important, to remove obstacles in their work. The rise of SaaS (with an API for whatever back-end function you need), Jamstack, Kubernetes, and all these other new technologies spreading across the enterprise software stack free developers to focus on the logic of the new application or service they are developing. They can forget about the infrastructure. Time-to-market cycles speed up. More and better services delivered much faster leads to happier, stickier customers. And more top-line revenue. In sum, it’s a partnership between developers and engineers/scientists. Developers abstract away all the infrastructure hassles and suddenly your engineers and scientists can help your business beat the competition and grab market share. It’s a match made in heaven. Or Hacker News.


Just How Much Is That Cloud Data Warehouse in the Window?

One common hybrid data warehouse scenario involves shifting specific workloads – typically, test-dev, disaster recovery, and analytic discovery – to the cloud context. An organization that employs a hybrid-multi-cloud scenario might seek to complement its on-premises data warehouse system by exploiting the desirable features or capabilities of two or more PaaS data warehouses. These might include inexpensive on-demand capacity – useful not only for analytic discovery, but for data scientists, machine learning (ML) engineers, data engineers, and other technicians who design pipelines that entail scheduling distributed data processing and data movement operations – or integration with cloud-adjacent software development, data integration, ML, artificial intelligence (AI), etc. services. Extending the data warehouse in this way does not preclude moving a large share or even a majority of on-premises workloads to the cloud, with the result that, over time, the PaaS data warehouse could draw the on-premises data warehouse (along with a constellation of multi-cloud data warehouse resources) into its orbit.



Quote for the day:

"When a man assumes leadership, he forfeits the right to mercy." -- Gennaro Angiulo

Daily Tech Digest - February 14, 2021

Outsmarting ML Biases: A Checklist

Machine learning algorithms relentlessly search for a solution. In the case of GANs, the generator and discriminator networks somehow find a way to fool each other. The result is a deepfake. Not that deepfakes are harmless, but ML is also used in more critical industries such as healthcare. So when a model fed with an underrepresented dataset is used, the chances of misdiagnosis increase. “Each ML algorithm has a strategy to answer optimally to your question,” warned Luca. ... The different definitions make things even more cumbersome for the data scientist. Citing the work on the impossibility of fairness, Luca also explained why some notions of fairness are mutually incompatible and cannot be satisfied simultaneously. “There is no single universal metric for quantifying fairness that can be applied to all ML problems,” he added. No matter how foolproof the data curation process is, loopholes might creep in. So, what are these loopholes? ... When it comes to ML fairness toolkits, Google’s TensorFlow team has been at the top. The team has been developing multiple tools to assist niche areas within the ML fairness debate. The whole debate around ML fairness is forcing companies like Google to establish an ecosystem of fairer ML practice through their tools.
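As a concrete example of why no single metric settles the question, here is a small sketch that computes one common notion, demographic parity, on synthetic predictions; satisfying it says nothing about equalized odds or calibration, which is where the incompatibility arises.

```python
# An illustrative computation of one fairness notion (demographic parity):
# compare positive-prediction rates across two groups. The labels, groups,
# and rates are synthetic.
import numpy as np

rng = np.random.default_rng(7)
group = rng.integers(0, 2, size=1000)                  # protected attribute (0 or 1)
preds = (rng.random(1000) < np.where(group == 1, 0.55, 0.40)).astype(int)

rate_0 = preds[group == 0].mean()
rate_1 = preds[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.2f}")
# Closing this gap can conflict with equalized odds or calibration --
# the "impossibility of fairness" tension mentioned above.
```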


Visual Studio Code comes to Raspberry Pi

There are already some great editors, but nothing of the calibre of VS Code. I can take my $35 computer, plug it into a keyboard and mouse, connect a monitor and a TV and code in a wide range of languages from the same place. I see kids learning Python at school using one tool, then learning web development in an after-school coding club with a different tool. They can now do both in the same application, reducing the cognitive load – they only have to learn one tool, one debugger, one setup. Combine this with the new Raspberry Pi 400 and you have an all-in-one solution to learning to code, reminiscent of my ZX Spectrum of decades ago, but so much more powerful. The second reason is to me the most important — it allows kids to share the same development environment as their grown-ups. Imagine the joy of a 10-year-old coding Python using VS Code on their Raspberry Pi plugged into the family TV, then seeing their Mum working from home coding Python in exactly the same tool on her work laptop as part of her job as an AI engineer or data scientist. It also makes it easier when Mum has to inevitably help with unblocking the issues that always come up with learners.


This new open source tool could improve data quality within the enterprise

While Soda SQL is more geared toward data engineers, Soda also offers a hosted service geared toward the business user and, specifically, the chief data officer (CDO). Interest in data testing and monitoring might start with the CDO when they recognize the need to ensure quality data feeding executive dashboards, machine learning models, and more. At the same time, data engineers, responsible for building data pipelines (transforming, extracting, and preparing data for usage), just need to do some minimal checks to ensure they're not shipping faulty data. Or, you might have a data platform engineer who just wants hands-off monitoring after connecting to the data platform warehouse. In this universe, data testing and data monitoring are two distinct things. In both cases, Baeyens said, "The large majority of people with which we speak have an uncomfortable feeling that they should be doing more with data validation, data testing, and monitoring, but they don't know where to start, or it's just kind of blurry for them." Soda is trying to democratize data monitoring, in particular, by making it easy for non-technical, business-oriented people to build the data monitors.
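Soda SQL expresses such checks declaratively; without reproducing its scan syntax here, the pandas sketch below shows the kind of minimal test a data engineer might run before shipping a pipeline, using invented table and column names (and sample data that deliberately fails the checks).

```python
# A generic data-quality gate in pandas -- not Soda SQL's own scan syntax,
# just an illustration of testing data before it ships downstream.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount":   [99.0, None, 120.0, -5.0],
})

checks = {
    "no missing amounts":    orders["amount"].notna().all(),
    "no duplicate order_id": orders["order_id"].is_unique,
    "amounts are positive":  (orders["amount"].dropna() > 0).all(),
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # With this sample data all three checks fail on purpose,
    # stopping the pipeline before faulty data moves on.
    raise ValueError(f"data quality checks failed: {failed}")
```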


Cybersecurity is still the #1 risk for manufacturers

We see lots of incidents, but there’s no obligation for the owners and operators to disclose them. The incidents that you see in the media are often just a small percentage of the incidents that actually occur. We know of many serious incidents that you’ll never read about in the headlines, and for good reason, really. So what I would say is that cybersecurity is still a priority for many organizations. It’s their number one risk, and it’s something that they’re dealing with every day. ... Ask the question, “What is the problem that I’d like to solve by implementing digital that any other solution couldn’t?” If you’re already on that journey, I would be looking back, reviewing, and asking, “Does my digital solution so far answer that question? Is it solving the problem I set out to solve?” In a recent study, we found that less than 20% of organizations have more than a third of their employees actually trained in digital and in their organization’s digital strategy. But more than 60% of our customers actually have a digital strategy, so there’s a mismatch: customers are heading out on the digital journey without really taking their employees with them.


Keeping control of data in the digital supply chain

While organisations will never have as much control over a supplier’s security as they do their own, they can take steps to minimise risks. Security standards must be set out within service level agreements (SLAs), for instance by insisting that the third party is certified to ISO 27001 as a minimum and ensuring that the supplier has a framework of policies and procedures governing information risk management processes. Unfortunately, this approach is rare. The UK Government’s Cyber Security Breaches Survey 2019 indicates that less than one in five businesses (18%) demanded that their suppliers have any form of cybersecurity standard or good practice guidelines in place. The issue becomes even more complicated when the sheer scale and intricacy of the average supply chain network comes into play. A firm may have its data stolen from a company three or four connections deep in the supply chain. If the breached third party lacks the ability to detect an attack itself, a company’s data could be in the hands of criminals for months before it is finally alerted to the breach. Even if a security breach originates with a third party, it will carry just as much of a financial and reputational cost as a direct attack on the organisation’s own network.


Metaethics, Meta-Intelligence And The Rise Of AI

The notion of ethics has evolved. Decisions about right and wrong have always depended on human cognition, guided by popular sentiment and socially acceptable norms. Now, with the rise of AI, machines are slowly taking over human cognitive functions, a phenomenon that author Ray Kurzweil predicts will increase over time and culminate in the advent of the singularity, where machines irrevocably overtake humans, possibly at some distant point in the future. This trend is causing technologists, researchers, policymakers and society at large to rethink how we interpret and implement ethics in the age of AI. ... To face the challenges of the future, we also need to develop a new discipline of meta-intelligence, taking inspiration from the concepts of metadata and metaethics. Doing so will help us improve the traceability and trustworthiness of AI-driven insights. The concept of meta-intelligence has been doing the rounds in thought leadership circles for the last few years, led especially by people thinking about and working on the singularity. Technological evolution and the rise of AI have become central to human progress today, and businesses around the world are being transformed by these technologies.


Qualcomm's new X65 5G modem downloads data at lightning-fast 10Gbps speeds

With the X65, unveiled Tuesday, users will get a bump in speed but also see better battery life. Coverage will improve, latency will decrease, and applications will be even more responsive than they are with Qualcomm's earlier X60 modem technology. Capacity will also be "massive," letting more people on a network make reliable, crisp video calls with their doctors and face off against rivals in streaming games. With the previous-generation X60 modem, just now arriving in smartphones like Samsung's Galaxy S21, you can download data over 5G networks at up to 7.5Gbps and upload data at up to 3Gbps, only slightly faster than the generation before it. But the X60 also has the ability to aggregate the slower but more reliable sub-6GHz networks with the faster but finicky millimeter-wave spectrum, boosting overall performance and helping users see faster average speeds. The X65 has the same benefit. While it's unlikely that you'll regularly -- or maybe even ever -- see 10Gbps download speeds, you'll consistently see speeds many times faster than your current 4G smartphone can manage.
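
As a rough back-of-the-envelope illustration (not from the article), here is what those theoretical peak rates would mean for a large download; real-world 5G throughput is far lower.

```python
# Back-of-the-envelope sketch: converting quoted peak rates into download times.
def seconds_to_download(size_gigabytes: float, rate_gbps: float) -> float:
    bits = size_gigabytes * 8e9          # GB -> bits (decimal units)
    return bits / (rate_gbps * 1e9)      # Gbps -> bits per second

for label, rate in [("X60 peak", 7.5), ("X65 peak", 10.0)]:
    print(f"{label}: a 50 GB game download takes ~{seconds_to_download(50, rate):.0f} s")
# X60 peak: ~53 s; X65 peak: ~40 s -- and only at the theoretical maximum.
```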


Using NGINX to Serve .NET Core, Nodejs, or Static Contents

NGINX is a high-performance HTTP server as well as a reverse proxy. Unlike traditional servers, NGINX follows an event-driven, asynchronous architecture. As a result, the memory footprint is low and performance is high. If you’re running a Node.js-based web app or .NET Core web application, you should seriously consider using NGINX as a reverse proxy. NGINX can also be very efficient at serving static assets; for all other requests, it will talk to your Node.js back end or .NET Core web application and send the response to the client. ... Although the focus of this article is NGINX, we will also be dealing with a little bit of bash, Node.js, and .NET Core. I have written about all of these topics on DZone, so you can check my other articles for background information if needed. ... A reverse proxy server is a web server that accepts requests and sends them to another web server, which actually creates the responses for those requests. The responses are sent back to the proxy server, which forwards them to the clients that issued the corresponding requests. NGINX is a web server that can act as a reverse proxy for ASP.NET Core applications and is also very good at serving static content.
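
The following is a minimal sketch of what such a setup might look like, assuming the application listens on localhost port 5000 and static assets live under /var/www/app/static; the domain, port, and paths are placeholders, not values from the article.

```nginx
# Minimal illustrative server block (paths and port are assumptions):
# static assets are served directly by NGINX, everything else is proxied
# to the Node.js or .NET Core app listening on localhost:5000.
server {
    listen 80;
    server_name example.com;

    location /static/ {
        root /var/www/app;          # serves files from /var/www/app/static/...
        expires 7d;                 # let clients cache static assets
    }

    location / {
        proxy_pass         http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
```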


To succeed in an AI world, students must learn the human traits of writing

AI cannot yet plan and does not have a purpose. Students need to hone skills in purposeful writing that achieves their communication goals. Unfortunately, the NAPLAN regime has hampered the teaching of writing as a process that involves planning and editing, because it favours time-limited, exam-style writing for no audience. Students need to practise writing in which they are invested, that they care about and that they hope will effect change in the world as well as in their genuine, known readers. This is what machines cannot do. AI is not yet as complex as the human brain. Humans detect humour and satire. They know words can have multiple and subtle meanings. Humans are capable of perception and insight; they can make advanced evaluative judgements about good and bad writing. There are calls for humans to become expert in sophisticated forms of writing, and in editing writing created by robots, as vital future skills. Nor does AI have a moral compass. It does not care. OpenAI’s managers originally refused to release GPT-3, ostensibly because they were concerned about the generator being used to create fake material, such as product reviews or election-related commentary.


Living, and Breathing Data Governance, Security, and Regulations

A top-down approach to building data and analytics platforms, based on data governance best practices and policies, is often the default choice. This approach can provide a cohesive and robust solution that complies well with privacy regulations and whose components interact well while adhering to strict security policies. Unfortunately, it can often become cumbersome for users and slow time-to-value, with data consumers forced to adapt their data usage and consumption to the strict compliance- and security-driven protocols driving the platform. On the flip side, a bottom-up approach to data analytics is engineering- and design-focused, with the goal of introducing incremental deliverables that add value to the platform in response to users’ needs. ... Whether top-down or bottom-up, it’s critical for organizations to start by documenting privacy, security, data risks, controls, and technology needs around data access, in order to address topics like a culture of federated data ownership, adoption of self-service or collaboration across teams around critical data sets, and enterprise-wide technology standards for certain key areas.



Quote for the day:

“Believe in your infinite potential. Your only limitations are those you set upon yourself.” -- Roy T. Bennett

Daily Tech Digest - February 13, 2021

Why Your Next CIO Will Be a CPO

The role of the CIO was to deploy technology efficiently to support the company’s strategies and plans. The role of the CPO, as inherited from pure technology companies, is to develop and maintain a deep understanding of the customer and market and to guide the delivery of products that best meet and monetize their needs, ahead of any and all competition. The traditional CIO derives the why and what from other parts of the organization and supplies the how. Transitional versions of the CIO, the CDO and other neologisms may start to encroach on the what. But the true CPO drives the why and what—and the how if they also have engineering, or collaborates on the how with a CTO or head of development if not. Does this sound broad, even encroaching on CEO territory? Well, yes. It’s no accident that former product chiefs are the new CEOs of Google and Microsoft. So what does that mean for you if you are in an IT organization? First, while your organization may or may not change the actual title from CIO to CPO, it’s important for your career to recognize when the definition of the job becomes what a CPO would do in a “pure” software company.


The most fundamental skill: Intentional learning and the career advantage

Stanford psychologist Carol Dweck’s popular work on growth mindsets suggests that people hold one of two sets of beliefs about their own abilities: either a fixed or a growth mindset. A fixed mindset is the belief that personality characteristics, talents, and abilities are finite or fixed resources; they can’t be altered, changed, or improved. You simply are the way you are. People with this mindset tend to take a polar view of themselves—they consider themselves either intelligent or average, talented or untalented, a success or a failure. A fixed mindset stunts learning because it eliminates permission not to know something, to fail, or to struggle. Writes Dweck: “The fixed mindset doesn’t allow people the luxury of becoming. They have to already be.” In contrast, a growth mindset suggests that you can grow, expand, evolve, and change. Intelligence and capability are not fixed points but instead traits you cultivate. A growth mindset releases you from the expectation of being perfect. Failures and mistakes are not indicative of the limits of your intellect but rather tools that inform how you develop. A growth mindset is liberating, allowing you to find value, joy, and success in the process, regardless of the outcome.


The Dos and Don’ts for SMB Cybersecurity in 2021

With insider threats accounting for the majority of cyberattacks, SMBs need to get to the root of the problem — human behavior. Inspiring change begins with raising awareness. To do this effectively, SMBs must first reflect on their business as a whole. This means identifying every “weak point” and addressing every potential impact the business could suffer if those weak points were targeted. For instance, many SMBs operate across supply chains that include various virtual and physical touchpoints. Because of this, if one section of the supply chain were hit by a cyberattack, the entire system could come crumbling down. By gathering and sharing this information in consistent, organization-wide training sessions that inform and entertain, SMBs can empower their staff with deeper threat awareness and help improve their individual security posture. ... SMBs should consider bringing in external experts to regularly analyze their IT infrastructure. This will ensure they get an unbiased view of the business’s needs and the strongest protection possible. Coupled with this, SMBs should regularly conduct internal security audits to better understand where hidden back doors exist across their organization.


Can Care Robots Improve Quality Of Life As We Age?

The new generation of care robots does far more than just manual tasks. These robots provide everything from intellectual engagement to social companionship that was once reserved for human caregivers and family members. When it comes to replicating or substituting for human connection, designers must be intentional about the outcomes these robots are meant to achieve. To what degree are care robots facilitating and maximizing emotional connection with others (a personified AI assistant that helps you call your grandchildren, for example) or providing the connection itself (such as a robot that appears as a huggable, strokable pet)? Research suggests that an extensive social network offers protection against some of the intellectual effects of aging. There could also be legitimate uses for this kind of technology in mental health and dementia therapy, where patients are not able to care for a “real” pet or partner. Some people might also find it easier to bond or be vulnerable with an objective robot than with a subjective human. Yet the risks and externalities of robots as social companions are not yet well understood. Would interacting with artificial agents lead some people to engage less with the humans around them, or to develop intimacy with an intelligent robot?


IBM and ExxonMobil are building quantum algorithms to solve this giant computing problem

Research teams from energy giant ExxonMobil and IBM have been working together to find quantum solutions to one of the most complex logistics problems of our time: managing the tens of thousands of merchant ships crossing the oceans to deliver the goods that we use every day. The scientists lifted the lid on the progress they have made so far and presented the different strategies they have been using to model maritime routing on existing quantum devices, with the ultimate goal of optimizing fleet management. ... Although the theory behind the potential of quantum computing is well established, it remains to be seen how quantum devices can be used in practice to solve a real-world problem such as the global routing of merchant ships. In mathematical terms, this means finding the right quantum algorithms to most effectively model the industry's routing problems on current or near-term devices. To do so, IBM and ExxonMobil's teams started with widely used mathematical representations of the problem, which account for factors such as the routes traveled, the potential movements between port locations, and the order in which each location is visited on a particular route.
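
To see why routing is hard classically, here is a toy Python sketch (not the teams' code, and the distance matrix is made up) that brute-forces the shortest closed route over a handful of ports. The number of candidate orderings grows factorially with the number of ports, which is the kind of combinatorial blow-up that quantum formulations of routing problems aim to tackle.

```python
# Toy classical baseline: brute-force the best closed tour over 5 ports.
# Visiting n ports from a fixed start has (n-1)! candidate orderings.
from itertools import permutations

# Hypothetical symmetric distance matrix between 5 ports (arbitrary units)
D = [
    [0, 4, 9, 7, 3],
    [4, 0, 6, 5, 8],
    [9, 6, 0, 2, 6],
    [7, 5, 2, 0, 4],
    [3, 8, 6, 4, 0],
]

def route_length(route):
    # Total distance of a closed tour starting and ending at port 0
    stops = (0, *route, 0)
    return sum(D[a][b] for a, b in zip(stops, stops[1:]))

best = min(permutations(range(1, len(D))), key=route_length)
print("best order of ports:", best, "length:", route_length(best))
```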


Palo Alto Networks Joins Flexible Firewall Party. Will Cisco Follow Suit?

In addition to migrating workloads to public clouds, companies have also started demanding a cloud-like experience in their data centers. This includes consumption-based pricing and the flexibility to scale usage and add services on demand. “And what we’re now doing is bringing extreme flexibility, simplicity, and agility to network security and software firewalls,” Gupta said. “So that’s why we’re reinventing yet again how customers buy these software firewalls and security subscriptions. And I hope that the industry will adopt that model and make it easier for customers.” However, other leading firewall vendors have already adopted similar consumption-based licensing approaches: Fortinet, Forcepoint, and Check Point rolled theirs out last year. Fortinet’s programs aim to give its virtual firewall customers more flexibility in how they consume those products and security services, said Vince Hwang, senior director of products at Fortinet. ... “They can allocate the points to any virtual firewall size and type of security services in seconds without incurring a procurement cycle. These virtual firewalls and security services can be used on any cloud and anytime. Customers can manage their consumption through a central portal available through Fortinet’s FortiCare service.”


India's Blockchain Ecosystem Is a Hotbed Of Crypto Innovation

Advancements in artificial intelligence have led to the development of automated decentralized finance strategies that replace the role of traditional fund managers, monitoring the market to identify the best risk-adjusted assets to deliver investment returns. Rocket Vault Finance leverages these advanced artificial intelligence predictive-analysis tools and machine learning algorithms to develop data-driven, intelligent, and automated investment strategies that minimize losses and maximize gains. According to the project, it consistently achieves over 100% APY returns on stablecoin capital while sparing users from managing multiple crypto assets across a range of liquidity mining, staking, or other DeFi platforms, reducing fees and risk. Rocket Vault Finance is free to use for retail investors holding the platform’s RVF tokens, with paid services on offer to institutional investors, providing an automated hybrid alternative to riskier yield farming projects and traditional market returns. Several other projects are also contributing to the rapidly growing Indian blockchain ecosystem, expanding its value proposition.


The virtual security guard: AI-based security startups have been the toast of the town, here’s why

As the threat landscape evolves, security providers have to be constantly on their toes, and businesses have to adopt a more unified approach to cyber risk management. Some of the biggest challenges that security and risk management leaders face are the lack of a consistent view at the micro and macro levels, prioritising what’s most critical, and maintaining transparency across the organisation when it comes to cybersecurity. “SAFE is built on the premise of these challenges, and our ability to provide real-time visibility at both a granular IP level and at an organisational level across people, process, technology, cybersecurity products, and third parties brings a completely new approach to enterprise cyber risk management,” says Saket Modi, Co-founder & CEO of Safe Security, a cybersecurity platform company. ... Growing at a mind-boggling 450 per cent, WiJungle, another AI-based security startup, uses AI for automation at the network level as well as for threat detection and analysis. The NetSec (network security) vendor offers a solution for office and remote network security.


How to adopt DevSecOps successfully

The DevSecOps manifesto says that the reason to integrate security into dev and ops at all levels is to implement security with less friction, foster innovation, and make sure security and data privacy are not left behind. Therefore, DevSecOps encourages security practitioners to adapt and change their old, existing security processes and procedures. This may sound easy, but changing processes, behavior, and culture is always difficult, especially in large environments. The basic requirement of the DevSecOps principle is to introduce a security culture and mindset across the entire application development and deployment process. This means old security practices must be replaced by more agile and flexible methods so that security can iterate and adapt to a fast-changing environment. ... Clearly, the biggest and most important change an organization needs to make is to its culture. Cultural change usually requires executive buy-in, as a top-down approach is necessary to convince people to make a successful turnaround. You might hope that executive buy-in makes cultural change follow naturally, but don't expect smooth sailing—executive buy-in alone is not enough. To help accelerate cultural change, the organization needs leaders and enthusiasts who will become agents of change.


Web shell attacks continue to rise

The escalating prevalence of web shells may be attributed to how simple and effective they can be for attackers. A web shell is typically a small piece of malicious code, written in common web development languages (e.g., ASP, PHP, JSP), that attackers implant on web servers to provide remote access and code execution. Web shells allow attackers to run commands on servers to steal data or to use the server as a launch pad for other activities such as credential theft, lateral movement, deployment of additional payloads, or hands-on-keyboard activity, while allowing the attackers to persist in an affected organization. As web shells become increasingly common in both commodity and targeted attacks, we continue to monitor and investigate this trend to ensure customers are protected. In this blog, we will discuss the challenges in detecting web shells and the Microsoft technologies and investigation tools available today that organizations can use to defend against these threats. We will also share guidance for hardening networks against web shell attacks. Attackers install web shells on servers by taking advantage of security gaps, typically vulnerabilities in web applications on internet-facing servers.
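
As a purely illustrative defensive sketch (not Microsoft's tooling), the script below walks a hypothetical web root and flags recently modified server-side scripts containing call patterns web shells commonly rely on. A real deployment would need allow-lists, file-integrity baselines, and behavioral telemetry rather than this crude heuristic, which will also flag some legitimate code.

```python
# Crude detection heuristic for illustration only: recently modified
# server-side scripts under a web root that call functions often abused
# by web shells. Expect false positives on legitimate code.
import os
import re
import time

WEB_ROOT = "/var/www"                        # hypothetical web root
EXTENSIONS = (".php", ".asp", ".aspx", ".jsp")
SUSPICIOUS = re.compile(rb"\b(eval|system|shell_exec|passthru|popen)\s*\(")
RECENT_SECONDS = 7 * 24 * 3600               # modified within the last week

def scan(root=WEB_ROOT):
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            if now - os.path.getmtime(path) > RECENT_SECONDS:
                continue
            with open(path, "rb") as f:
                if SUSPICIOUS.search(f.read()):
                    print("review:", path)

if __name__ == "__main__":
    scan()
```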



Quote for the day:

"Change the changeable, accept the unchangeable, and remove yourself from the unacceptable." -- Denis Waitley