Daily Tech Digest - May 19, 2021

A New Brain Implant Turns Thoughts Into Text With 90 Percent Accuracy

When asked to copy a given sentence, T5 was able to “mindtext” at about 90 characters per minute (roughly 45 words by one estimate), “the highest typing rate that has yet been reported for any type of BCI,” the team wrote, and a twofold improvement over previous setups. His freestyle typing—answering questions—matched that performance overall, and met the average thumb-texting speed of his age group. “Willett and co-workers’ study begins to deliver on the promise of BCI technologies,” said Rajeswaran and Orsborn, not just for mind-texting, but also for what comes next. The idea of tapping into machine learning algorithms is smart, yes, because the field is rapidly improving—and it illustrates another solid link between neuroscience and AI. But perhaps more importantly, an algorithm’s performance relies on good data. Here, the team found that the time difference between writing letters, something rather complex, is what made the algorithm perform so well. In other words, for future BCIs, “it might be advantageous to decode complex behaviors rather than simple ones, particularly for classification tasks.”


The question of QR code security

These codes can invoke various actions on a smartphone. Here lies the threat. While a QR code may appear as though it is designed to help us sign in to a Wi-Fi network or be part of an innocent marketing campaign, its intent may be entirely different, with threat actors architecting and deploying malicious QR codes in a variety of ways. They can be used to direct the user to a malicious URL for the purpose of phishing; force a call, thereby exposing the end user’s phone number to a scammer or a potentially expensive call centre; send a payment within seconds; obtain a user’s location; or draft an email or text and populate the recipient and subject lines. Additionally, they may introduce a compromised network on a device’s preferred network list and include a credential that enables the device to automatically connect to that network. Once connected, an attacker could launch further ‘Man-in-the-Middle’ attacks. Given the variety and seriousness of these potential threats, some key statistics released by MobileIron in September 2020 provide cause for alarm. ... At the same time, however, 71% of respondents said they could not “distinguish between a legitimate and malicious QR code”.
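The action-triggering payloads described above are just text with conventional scheme prefixes (tel:, mailto:, sms:, the WIFI: network format, plain URLs), so a first line of defence can be as simple as screening a decoded payload before acting on it. A minimal sketch in Python; the scheme list and risk labels are illustrative, not exhaustive:

```python
# Screen a decoded QR payload before acting on it. The scheme prefixes
# are standard QR conventions; the risk wording is illustrative only.

RISKY_SCHEMES = {
    "tel": "forces a phone call, exposing the user's number",
    "sms": "drafts a text with a pre-filled recipient",
    "mailto": "drafts an email with recipient/subject pre-populated",
    "wifi": "adds a network (and credential) to the device",
}

def screen_payload(payload: str) -> str:
    """Classify a decoded QR payload before the device acts on it."""
    lowered = payload.strip().lower()
    for scheme, risk in RISKY_SCHEMES.items():
        if lowered.startswith(scheme + ":"):
            return f"CAUTION ({scheme}): {risk}"
    if lowered.startswith("http://"):
        return "CAUTION (http): unencrypted URL, possible phishing"
    if lowered.startswith("https://"):
        return "URL: verify the domain before opening"
    return "Unrecognised payload: do not act on it blindly"

print(screen_payload("WIFI:S:FreeCafe;T:WPA;P:hunter2;;"))
print(screen_payload("tel:+1900555000"))
```

Real QR readers decode far more formats than this, but the point stands: treat every payload as untrusted input and show the user what it will do before doing it.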


Security doesn’t always require immediacy

Understanding the importance of long-term security investment is one thing. Putting it into practice is another challenge entirely. Organizations can look to assign this task internally, but ultimately, IT personnel need to focus on business goals. At the same time, it is not always reasonable to expect an IT department to keep up with developments on topics ranging from software security to cryptography to hardware architecture. By relying on vendors for security agility, organizations can outsource the technologies required for long-term protection. Enterprise service management (ESM) is one such option that looks to ensure organizations remain protected for years to come. As digital transformation accelerates, new technologies that enable the business tend to go out of date entirely too soon after deployment. Investing in and enhancing the IT department to tackle the challenge can be one way forward, but ultimately businesses need their employees to focus on their own areas of expertise. ESM enables the adoption of new technologies whilst allowing organizations to move at their own pace.

Allyship: Stepping up for a more inclusive IT

Work that involves self-reflection is challenging, and when it comes to addressing diversity issues and one’s position within them, some IT leaders may be unsure about how to effectively approach these sensitive topics. But being an ally can set an example for the entire company. Allies in leadership positions can demonstrate to employees that conversations about diversity, equity, and inclusion (DEI) are encouraged, and that employees from traditionally marginalized groups have a safe environment to bring up concerns they may have about their own experiences in the workplace. “I think it’s about setting an example from the top, being willing to engage in those uncomfortable conversations. A really effective ally is willing to put some skin in the game, to put some of their privilege on the line, on behalf of someone who doesn’t have that privilege,” says Malcom Glenn, director of public affairs at Better.com and former head of global policy for accessibility and underserved communities at Uber. Conversations around racism, sexism, and bigotry, both within and outside of the workplace, can be very difficult. 


Ransomware attacks are not a matter of if, but when

As the CISO role evolves and more data is stored in clouds, DeFiore said her priorities right now are moving "back to basics" and knowing where the airline's data is, applying patches and working from a stance of least privilege. Also important is reducing the attack surface, she said, and "making sure we're only publishing things to the internet that need to be there and segmenting and making sure there's no opportunities for lateral movement" inside the network. Twitter's mission is to protect public conversations, and Sethi said that requires being able to recover quickly. She also said she thinks there will be an increase in the number of security vendors suffering breaches, "which is why I say think about who you partner with." Salem said he was impressed with "how well CISOs responded during the pandemic," and moved from a world in which they had a lot of control to very little--almost overnight. The lesson the security community has learned from that experience is to be agile, he said. Looking ahead, the CISO needs to continue becoming integrated into the day-to-day operations of the business so they can be better prepared, he said.


Data Science Focus Areas for the Future

Forecasting based on historic trends has long been an essential part of conducting business. Whether using seasonal averages or industry knowledge, predicting the future has always been a use case with numerous applications. Data scientists have a good mix of talents that line them up with requests to fit forecasting models, but they have not always had the educational background to effectively communicate their results in terms of economics and finance. ... As more fitted models start making their way to production, a knowledge gap in deploying and maintaining models emerges. A 100% accurate model that only lives on your machine is close to 0% useful. The ETL (extract, transform and load) and packaging of ML capabilities with requirements is a grey area for many current data scientists, and not always covered in training programs. Auto-ML capabilities are increasing the frequency with which a useful model, designed to be consumed, will be considered for production. In my opinion, this is an area of focus that will become the most lucrative in terms of jobs and compensation.


Cloud banking: More than just a CIO conversation

Data security concerns are top of mind for bank leaders. An important part of understanding the cloud is considering how an enterprise’s current infrastructure and capabilities may be limiting its ability to detect and address new risks and vulnerabilities—and how cloud technology can help. Security is different in the cloud because of the tools that are native to each cloud provider’s environment and the fact that cloud providers typically take responsibility for the security of the lower-level infrastructure layers. The shared security responsibility between cloud providers and the clients they host changes how organizations should anticipate and prepare for security risks. ... Cloud computing can help banks and financial services firms meet ever-evolving regulatory reporting requirements (e.g., Comprehensive Capital Analysis and Review, Solvency II) in multiple operating jurisdictions—a critically important capability in an industry where cross-border transactions are the norm. Cloud solutions can also help banks conduct intraday liquidity and risk calculations, and mine trade surveillance data to detect money laundering and other fraud issues.


5G smartphones have arrived. But for many, the reason to upgrade is still missing

Globally, across all the markets surveyed by Ericsson where 5G commercial networks are available, an average 4% of consumers own a 5G smartphone and have a 5G subscription. In the UK, an overwhelming 97% of respondents are yet to embrace next-generation connectivity. This is partly due to the lack of clarity in the marketing of 5G, which has confused customers as to what the technology is and what it can offer. Heavy tech jargon and misinformation campaigns have in some cases even put users off entirely from planning to upgrade. In the UK, for example, the number of consumers intending to upgrade to 5G next year stands at 25% – down from 27% in 2019. For the few who have switched to 5G networks, however, the experience overall seems positive, with better levels of satisfaction recorded compared to users connected with 4G LTE. Perhaps as a reflection of 5G's capabilities, Wi-Fi usage among those who have upgraded is reducing. A quarter of respondents, said Ericsson, have either decreased or stopped using Wi-Fi after switching to 5G.


Turning an uncertain 2021 into a year of opportunity

IT leaders putting new cloud and technology strategies in place to help improve their resiliency and availability must ensure that, once implemented, they don’t just sit back and hope that it works. Organisations need to know that the improvements to their business continuity plans (BCPs) will stand the test of disruption when unexpected, and even significant planned-for, events occur further down the line. Stress-testing exercises are the best way of identifying gaps and faults in your BCP, before it is too late and remediating any problems becomes reactive rather than proactive. Organisations can do this by identifying which threats pose the biggest risk to the business – this is a vital step as every company and industry is different. Once these have been identified, each scenario can be placed in order of priority, by gauging exactly how much impact it could have on the business and how complex the plan needs to be to respond. When conducting these exercises, each team member should also be assigned a clear role. This might be as an active player in the BCP test, or it may be as an external party, for example an evaluator or observer, who can help spot any flaws and play a vital role in measuring its success.
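The prioritisation step described above, scoring each threat scenario by business impact and likelihood and then ranking, can be sketched in a few lines. The scenarios and scores below are hypothetical:

```python
# Rank BCP test scenarios by a simple risk score (impact x likelihood).
# Threats and 1-5 scores are illustrative; real ones come from a
# business impact analysis.

scenarios = [
    {"threat": "ransomware outbreak",  "impact": 5, "likelihood": 4},
    {"threat": "cloud region outage",  "impact": 4, "likelihood": 2},
    {"threat": "key supplier failure", "impact": 3, "likelihood": 3},
]

# Highest risk first; these are the scenarios to stress-test earliest.
ranked = sorted(scenarios,
                key=lambda s: s["impact"] * s["likelihood"],
                reverse=True)

for s in ranked:
    print(f'{s["threat"]}: risk score {s["impact"] * s["likelihood"]}')
```

A real programme would weight the scores per industry, but even this crude ordering forces the vital conversation about which disruptions deserve the first exercise.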


NVIDIA Announces AI Training Dataset Generator DatasetGAN

A generative adversarial network (GAN) is a system that is composed of two deep-learning models: a generator which learns to create realistic data and a discriminator which learns to distinguish between real data and the generator's output. After training, often the generator is used alone, to simply produce data. NVIDIA has used GANs for several applications, including its Maxine platform for reducing video-conference bandwidth. In 2019, NVIDIA developed a GAN called StyleGAN that can produce photorealistic images of human faces and is used in the popular website This Person Does Not Exist. Last year, NVIDIA developed a variation of StyleGAN that can take as input the desired camera, texture, background, and other data, to produce customizable renderings of an image. Although GANs can produce an infinite number of unique high-quality images, most CV training algorithms also require that images be annotated with information about the objects in the image. ImageNet, one of the most popular CV datasets, famously employed tens of thousands of workers to label images using Amazon's Mechanical Turk.
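The two-model setup described above can be illustrated structurally with stub functions standing in for real neural networks: a generator maps noise to samples, and a discriminator scores how real a sample looks. This is a sketch of the roles only, not a trainable GAN:

```python
# Structural sketch of a GAN's two roles, with toy stand-ins for the
# neural networks. Real GANs update both models via gradient descent.
import random

def generator(z):
    # Stub: map a noise value to a "sample"; a real generator is a net.
    return 2 * z + 1

def discriminator(x):
    # Stub: score in (0, 1] for how "real" a sample looks
    # (here, "real" data is assumed to sit near 3.0).
    return 1.0 / (1.0 + abs(x - 3.0))

# One adversarial round: the discriminator scores real vs. generated
# samples; in a real GAN these scores drive the two models' updates.
real_sample = 3.0
fake_sample = generator(random.random())
print("D(real) =", discriminator(real_sample))
print("D(fake) =", discriminator(fake_sample))
```

The tension is the whole trick: the generator is trained to push D(fake) up, the discriminator to push it down, and at equilibrium the generated data is hard to tell from real data.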



Quote for the day:

"A leader is one who sees more than others see and who sees farther than others see and who sees before others see." -- Leroy Eimes

Daily Tech Digest - May 18, 2021

How penetration testing can promote a false sense of security

Savvy cybercriminals, not wanting to waste time or money, look for the simplest way to achieve their goal. "Attackers have access to numerous tools, techniques, and even services that can help find the unknown portion of an organization's attack surface," suggested Gurzeev. "Similar to the 13th century French attackers of Château Gaillard, but with the appeal of lower casualties and lower cost with a greater likelihood of success, pragmatic attackers seek out an organization's externally accessible attack surface." As mentioned earlier, completely protecting an organization's cyberattack surface is nearly impossible—partly due to attack surfaces being dynamic and partly due to how fast software and hardware change. "Conventional tools are plagued by something I mentioned at the start: assumptions, habits, and biases," explained Gurzeev. "These tools all focus only where they are pointed, leaving organizations with unaddressed blind spots that lead to breaches." By tools, Gurzeev is referring to penetration testing: "Penetration testing is a series of activities undertaken to identify and exploit security vulnerabilities. ..."


Microservices Architecture: Breaking the Monolith

The first thing to know: the less communication, the better the relations. It’s very easy and very tempting to create lots of services that are easy to test from a singular standpoint, but as a whole, your system will get really complicated and tangled. That makes things difficult to track should a problem arise, because you’ve got this enormous entanglement and it may be hard to identify where the root of the problem lies. Another important consideration is to put events onto a queue. Many times we have been told that we cannot break things into separate services because this thing has to be perfectly synchronized with events that happen in the next steps. Usually, that’s not true. Thanks to the queueing systems and topic-based messaging systems that exist today, there are lots of ways to break synchronization. It’s true that you are adding an extra layer that could bring some latency problems, but in the end, being able to break all the synchronicity will probably end up improving your experience. ... It is very easy to keep creating microservices on the cloud, but if you don’t have a clear plan, that also makes it very easy to lose track of your project’s budget.
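The decoupling idea above can be sketched with Python's standard library, where queue.Queue stands in for a real broker such as RabbitMQ or Kafka: the producer publishes events and returns immediately, while a consumer drains them at its own pace.

```python
# Producer/consumer decoupling via a queue: the "order service" never
# waits on the "email service". queue.Queue is a stand-in for a broker.
import queue
import threading

events = queue.Queue()
processed = []

def order_service():
    # Publishes events and returns immediately; no synchronous call
    # into the downstream service.
    for order_id in (1, 2, 3):
        events.put({"event": "order_placed", "id": order_id})

def email_service():
    # Consumes events asynchronously, at its own pace.
    while True:
        evt = events.get()
        if evt is None:  # sentinel: shut down
            break
        processed.append(evt["id"])
        events.task_done()

consumer = threading.Thread(target=email_service)
consumer.start()
order_service()
events.put(None)   # tell the consumer to stop
consumer.join()
print(processed)   # all orders handled without blocking the producer
```

The extra hop does add latency, as the speaker concedes, but the producer's availability no longer depends on every downstream service being up at the same instant.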


Chip shortage will hit IT-hardware buyers for months to years

Cisco CEO Chuck Robbins told the BBC in April: “We think we’ve got another six months to get through the short term. The providers are building out more capacity. And that’ll get better and better over the next 12 to 18 months.” The problem could last even longer, others say. “The supply chain has never been so constrained in Arista history,” Arista CEO Jayshree Ullal told analysts at the company’s recent financial briefing. “To put this in perspective, we now have to plan for many components with 52-week lead time.” “We have products with extremely large lead times that we plan ahead for. And I would be remiss if I didn’t say that, while we have some great partners, the semiconductor supply chain is still constrained,” Ullal said. “Our team has taken some very important steps to build out our inventory for some of these long-lead-time components, but we could use a lot more parts than we still have.” In its first-quarter earnings call, Juniper Networks CFO Ken Miller told analysts that ongoing supply constraints are likely to continue for a year or more.


Considering the ethics of tech for a more responsible future

“At All Tech Is Human, we recently released a report on improving social media, and after interviewing a diverse sample of 42 individuals from across civil society, government and industry, we realised that we don’t have an agreed future as to where social media should be headed. “This showed that we need more input from diverse groups to determine a better forward action.” The All Tech Is Human founder went on to identify data extraction as the biggest issue regarding the power of social media, due to most outlets being based on a model of obtaining user data, which benefits advertising over apps. “The fact that social media practices are more geared towards advertisers than communication creates most of the problems we see,” Polgar continued. “This is where regulations are important. These platforms are trying to maximise their profitability inside the parameters of legality.” According to Polgar, while tech companies need to consider the need to crack down on misinformation around topics such as Covid-19, the other side of the coin manifests itself in the argument that social media outlets don’t have the moral authority to remove these posts from the platform.


Customer service is not customer experience (and vice versa)

Because customer experience is strategic, not tactical, you need to know where the value is coming from, and where you’re throwing good money after bad. First, identify your valuable customers, advises Strategex’s Nash, then go deeper to analyze why they are valuable. Are they spending money broadly or deeply, or both? “We have years and years of data to prove the 80/20 rule — that 80 percent of your revenue comes from 20 percent of your customers.” More than that, she adds, it’s not uncommon for the top 5 percent of customers to produce half the revenue. “Words get people’s attention; data causes action,” she notes. This analysis matters because the resources spent servicing unprofitable customers can be a distraction from work that should be done to create a great experience for those who matter most to your business. “Once you know who your top customers are, you can create a customer experience for them, with the appropriate expectations on their side and effort on the employee side,” Nash says. And you can set different experience and service expectations for less-valuable customers. This can be as simple as offering clearly branded tiers of service or membership.
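The 80/20 analysis Nash describes is straightforward to run once revenue is attributed per customer. A quick sketch with hypothetical figures:

```python
# Revenue-concentration ("80/20") check: what share of revenue comes
# from the top 20% of customers? Figures are hypothetical.
revenues = [500, 300, 90, 40, 25, 15, 10, 8, 7, 5]  # one entry per customer

revenues.sort(reverse=True)
top_n = max(1, len(revenues) // 5)        # the top 20% of customers
share = sum(revenues[:top_n]) / sum(revenues)
print(f"Top 20% of customers produce {share:.0%} of revenue")
```

In this toy data the top two customers carry 80% of revenue, exactly the pattern Nash says years of Strategex data keep confirming; run the same arithmetic on real billing data before redesigning experience tiers.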


Interview With Srikanth Phalgun Pyaraka, Chartered Data Scientist

While working with business stakeholders to integrate data and analytics into business models, we have faced multiple challenges. One of the significant challenges we face in most organisations is moving from Business Intelligence reporting to the next level of enabling predictive or prescriptive analytics decision-making. This is what we call the analytics chasm. Organisations should move beyond the analytics chasm, which requires a change in the mindset of decision-makers. The main focus should be on leveraging technology for competitive differentiation, not competitive parity. Greater emphasis should be placed on building infrastructure and data analytics environments to support data-driven business initiatives. ... The Chartered Data Scientist designation is the highest distinction in the data science profession. The exam looks for skills including computer programming in R and Python; mathematics, especially statistics and probability; analytical methods such as EDA and ML algorithms; advanced analytics including deep learning, computer vision and NLP; and business analytics at international standards.


10 Emerging Cybersecurity Trends To Watch In 2021

Extended detection and response (XDR) centralizes security data by combining security information and event management (SIEM); security orchestration, automation, and response (SOAR); network traffic analysis (NTA); and endpoint detection and response (EDR). Obtaining visibility across networks, cloud, and endpoints, and correlating threat intelligence across security products, boosts detection and response. An XDR system must have centralized incident response capability that can change the state of individual security products as part of the remediation process, according to research firm Gartner. The primary goal of an XDR platform is to increase detection accuracy by correlating threat intelligence and signals across multiple security offerings, and to improve security operations efficiency and productivity, Gartner said. XDR offerings will appeal to pragmatic midsize enterprise buyers that do not have the resources and skills to integrate a portfolio of best-of-breed security products, according to Gartner. Advanced XDR vendors are focusing up the stack by integrating with identity, data protection, cloud access security brokers, and the secure access service edge to get closer to the business value of the incident.
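The correlation step at the heart of XDR can be illustrated simply: alerts from separate tools are grouped by host, and hosts flagged by multiple independent sources surface as higher-confidence incidents. The alert data below is hypothetical:

```python
# Toy cross-tool correlation: group alerts by host and promote hosts
# flagged by 2+ independent tools. Alerts are illustrative only.
from collections import defaultdict

alerts = [
    {"source": "EDR",  "host": "srv-01", "signal": "suspicious process"},
    {"source": "NTA",  "host": "srv-01", "signal": "beaconing traffic"},
    {"source": "SIEM", "host": "srv-01", "signal": "failed admin logins"},
    {"source": "EDR",  "host": "wks-17", "signal": "macro execution"},
]

by_host = defaultdict(list)
for a in alerts:
    by_host[a["host"]].append(a["source"])

# A host seen by multiple independent tools is a stronger incident
# candidate than any single alert on its own.
incidents = {h: srcs for h, srcs in by_host.items() if len(set(srcs)) >= 2}
print(incidents)
```

Production XDR platforms correlate on far richer keys (users, processes, time windows) and feed the result back into automated response, but the accuracy gain comes from exactly this kind of multi-signal agreement.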


How the API economy is powering digital transformation

“APIs allow businesses to more efficiently unify and structure data from across multiple communication platforms and leverage that data to build more productive workflows, bring products and features to market faster, and create modern user experiences that drive adoption and retention,” Polyakov told VentureBeat. “APIs allow businesses to achieve all of this without having to commit large amounts of time and resources, allowing product and engineering teams to focus on other critical issues and business goals.” However, Polyakov notes that many of the best APIs are those that handle and transfer lots of rich data, meaning “proper security protocols and compliance certifications” are vital. “Without proper assessments or an understanding of good design for security, businesses can accidentally expose sensitive information or unintentionally open themselves up to malicious inputs, compliance violations, and more,” Polyakov said. ... “The API economy has empowered companies to be more successful — whether it’s through leveraging third-party APIs to improve business processes, attracting and retaining customers, or producing an API as a product,” Bansal told VentureBeat.


How 2020 Shaped Transformation for Public Sector CIOs

Digital citizen services saw increased demand with people needing 24/7 access to critical services and information. What was once considered more of a “nice-to-have” became an absolute necessity. With some normalcy returning, local governments must maintain this momentum toward modernization with digital citizen services at the forefront of their digital transformation plans. Remote work in the public sector increased efficiency, cost-savings and led to more empowered and engaged government employees. A survey found remote government employees 16% more engaged, 19% more satisfied and 11% less likely to leave their agencies than non-remote workers. Much like the private sector, when deciding on what a post-pandemic workplace looks like, local governments need to consider a hybrid environment and continue providing infrastructure and support for remote work. Advanced cybersecurity is far from a new priority for local governments. But the rapid digitization of the public sector over the past year -- increased digital services and data, mobilization of the workforce, cloud migration, and more -- made cybersecurity an even bigger focus. 


Hiring remote software developers: How to spot the cheaters

There is a subtle balancing act in providing an assessment platform that is efficient at sensing fraud, but at the same time provides a good experience for honest test takers. The most successful assessment platforms usually apply a two-pronged approach by mixing and matching fraud mitigation with fraud detection. Signing the code of honor is an example of a graceful and efficient mitigation tactic, rooted in academic research (Ariely, 2007) and confirmed by years of practice. It has been scientifically established that being reminded of moral issues makes an individual less prone to cheat. It is always wise to protect the platform’s evaluation content. Quality vendors limit the time and number of exposures of the same assessment content, actively monitor scores and pass rates to preempt task depletion, and constantly crawl the internet to identify leaked tasks and solutions. Test randomization, a platform feature that enables automated on-the-spot test creation from a set of preconfigured equivalent tasks, is helpful in mitigating cheating, since it’s harder to game a system that is less predictable.
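Test randomization as described above amounts to assembling each assessment on the spot from pools of equivalent tasks. A minimal sketch, with hypothetical task pools:

```python
# On-the-spot test assembly from pools of equivalent tasks: every
# candidate gets a different but comparable assessment, so leaked
# answers to one variant are less useful. Pools are hypothetical.
import random

pools = {
    "algorithms": ["two-sum", "merge-intervals", "lru-cache"],
    "sql":        ["top-n-per-group", "dedupe-rows", "window-funcs"],
    "debugging":  ["off-by-one", "race-condition", "leaky-handle"],
}

def build_assessment(seed=None):
    rng = random.Random(seed)
    # One randomly chosen task per skill pool.
    return {skill: rng.choice(tasks) for skill, tasks in pools.items()}

print(build_assessment(seed=42))
```

The hard part in practice is not the sampling but keeping the pooled tasks genuinely equivalent in difficulty, otherwise randomization trades cheating risk for scoring unfairness.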



Quote for the day:

"If stupidity got us into this mess, then why can't it get us out?" -- Will Rogers

Daily Tech Digest - May 17, 2021

Paying it forward: why APIs are key to easing technical debt

Organisations must find a way to reduce their technical debt, by replacing tight couplings with a more flexible integration layer. As such, API strategies are becoming more important than ever. APIs create a loose coupling between applications, data, and devices, so organisations can make changes quickly without impacting their existing integrations or the functionality of digital services. It therefore becomes easier to accelerate innovation and deliver new products and services faster, without increasing the risk of business disruption or spiralling costs. One organisation putting this into practice is Allica Bank, a new, digital-only bank that exclusively caters to SMEs. Rather than build its offerings around one core platform as traditional banks do, Allica is built around a more flexible integration layer, underpinned by APIs. When it needs to expose the data from a certain application or system, it does so via an API, without the need to write any code to connect the systems in question. This makes for a much more agile operation, as new services can be switched in and out as needed. For Allica, this level of agility has been critical to its ability to meet its customers’ needs for urgent access to credit in 2020.


Machine Learning and the Coming Transformation of Finance

As financial firms get more comfortable with machine learning in their most advanced departments, they’ll start to adapt it in other areas to deal with the vast treasure trove of structured and unstructured data pouring into their data lakes. Whether that’s trying to give customers better answers when they call with questions, or quickly figuring out whether someone is qualified for a loan, machine learning will seep into every aspect of the financial enterprise. It will also revolutionize the areas where it’s already dominant, trading and fraud. None of this comes without risks though. Rule-based systems are at least easier to understand. People can inspect and interpret hand-coded rules, but with machine learning the systems are more opaque and we don’t always know why a machine made the decision it made. Even worse, as governments take their first stabs at regulation, it’s clear from early drafts of bills in the EU that regulators don’t fully understand how machine learning models work, and they’ve drafted vaguely worded bills that will be open to interpretation and create additional compliance complexity.


Generating greater public value from data

Data ownership is a complex concept. If I have data about you in my database, who owns that data? Does it depend on what kind of data it is? For example, if I know you just bought a new boat, can I sell that information? What if I know you were just diagnosed with cancer? According to a 2018 survey, 90% of respondents believe it is unethical to share data about them without their consent, highlighting growing concerns surrounding data control and ownership. Bearing this in mind, and recognizing the importance of building citizen trust, some governments have begun to establish frameworks to give citizens greater control over their data. For instance, in January 2020, Indonesia’s government submitted a bill to parliament that would require explicit consent to distribute personal data such as name, nationality, religion, sexual orientation, or medical records. Violators could face up to seven years in jail for sharing citizen data without consent. Another governance approach is shown by the UK National Health Service (NHS). In the COVID-19 app of the UK NHS, the Department of Health and Social Care, NHS England, and NHS Improvement are the designated data controllers.


Cyber investigations, threat hunting and research: More art than science

There is a reason why this is a requirement to become one of the most successful. Security defenders need to be 100% perfect at protecting 100% of the countless entry points 100% of the time in order to prevent breaches, while on the other hand, hackers only need one exploit that works. While that adage is considerably oversimplified, the moral is true: Being a defender means keeping up with an impossible firehose of changing technologies, controls, and attacks. Not to mention, your adversaries are not pieces of code – they are creative and motivated people. And let’s be honest, hacking is fun! When you are engaged in something fun, you likely have heightened motivation and creativity, so only those who approach the challenge of defense work with the same level of play and creativity as hackers will rise to the top of their team, company, and industry. The reflections of this “playful” approach can be seen in quotes from some of the most famous contemporary artists of today. “When someone sees one of my paintings, I want them to really feel the place that I’m depicting. And so, my desire is that they’re going to want to travel into that painting and become part of it.” – James Colema


Appian Launches New Low-Code Automation Platform For Enterprises

Interestingly, the launch of its new low-code automation platform comes when enterprises are looking for quick solutions to deploy AI-powered applications and smooth workflow automation across departments with limited resources and agile processes. Today, low-code and no-code technology platforms have emerged as a go-to model for businesses. Several players, including Appian, Microsoft, Amazon, Pega, and ServiceNow, are working on products and ideas to ease the burden for enterprises. In India, companies like Infosys, HCL Technologies and Tech Mahindra, alongside various startups, are also working on this technology. “This is the time for low-code automation platforms,” said Matt Calkins, Appian founder and CEO. “We have just started a new decade, but low-code is how applications are built in the future. It’s inevitable,” said Calkins. Gautam Nimmagadda, CEO of cloud-based no-code application development platform Quixy, told AIM that no-code would allow more companies to participate in software development, allowing professional developers to focus on advanced and specialised areas.


How to Take AIOps from a Promising Concept to a Practical Reality

AIOps offers organizations the potential to improve IT team productivity and cost while fortifying overall business stability and resilience. The technology also supplies the ability to gain deep insights on customer experiences and journeys. "AIOps can bring predictive abilities to operations so organizations are able to adjust to changes," Velayudham said. "By automating the mundane work and uncovering insights from large datasets that aren’t possible to sift through manually, AIOps can increase IT team efficiency," he added. By taking a strategic and intelligent approach to IT automation, businesses can also accelerate their digital transformation efforts. "IT automation can also eliminate repetitive manual tasks, freeing up your IT team to address more strategic tasks, making the entire team more valuable to the business," Mirani said. The AIOps vendor field is growing rapidly. This fact should help ease AIOps adoption, but it's also creating some confusion for potential customers as they find themselves sorting through various tools and approaches. 


Kubernetes: 6 open source tools to put your cluster to the test

Kube-monkey is a version of Netflix’s famous (in IT circles, at least) Chaos Monkey, designed specifically to test Kubernetes clusters. Chaos Monkey essentially asks: “What happens to our application if this machine fails?” It does this by randomly terminating production VMs and containers. As a manifestation of the broader discipline of chaos engineering, the core idea behind the open source tool is to foster resilient, fault-tolerant applications by treating failure as a given in any environment. ... Kubernetes has lots of native security controls that require proper configuration and fine-tuning over time. The community commitment to the platform’s security has also led to the creation of various commercial and open source tools for further ensuring the security of your applications and environment. Kube-hunter is a good example: it’s an open source tool for pen-testing your cluster and its nodes. Basically, penetration testing is to security what chaos testing is to resiliency. By assuming that you have weaknesses that an attacker can exploit (because you almost certainly do), you more proactively build security into your systems. You’re attacking yourself to discover holes before someone else does.
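As a concrete example of the opt-in chaos testing described above, kube-monkey only terminates workloads that explicitly ask for it via labels. The snippet below follows kube-monkey's documented label scheme, but the names and values shown are illustrative; verify against the project's current README before relying on them:

```yaml
# Hypothetical Deployment fragment opting in to kube-monkey.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    kube-monkey/enabled: enabled      # opt this workload in
    kube-monkey/identifier: my-app    # groups the app's pods together
    kube-monkey/mtbf: '2'             # mean time between "failures", in days
    kube-monkey/kill-mode: "fixed"    # terminate a fixed number of pods
    kube-monkey/kill-value: '1'       # ...one pod per scheduled purge
spec:
  template:
    metadata:
      labels:
        kube-monkey/enabled: enabled
        kube-monkey/identifier: my-app
```

Because nothing is a target unless it carries these labels, teams can ramp up chaos testing service by service instead of exposing the whole cluster at once.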


We need to design distrust into AI systems to make them safer

The negatives are really linked to bias. That’s why I always talk about bias and trust interchangeably. Because if I’m overtrusting these systems and these systems are making decisions that have different outcomes for different groups of individuals—say, a medical diagnosis system has differences between women versus men—we’re now creating systems that augment the inequities we currently have. That’s a problem. And when you link it to things that are tied to health or transportation, both of which can lead to life-or-death situations, a bad decision can actually lead to something you can’t recover from. So we really have to fix it. The positives are that automated systems are better than people in general. I think they can be even better, but I personally would rather interact with an AI system in some situations than certain humans in other situations. Like, I know it has some issues, but give me the AI. Give me the robot. They have more data; they are more accurate. Especially if you have a novice person. It’s a better outcome. It just might be that the outcome isn’t equal.


Performance Testing of Microservices

You need a distinct strategy for testing microservices because they sit behind a distinct architecture and have several integrations with other microservices, both within the organization and from the outside world (third-party integrations). Moreover, they necessitate a huge amount of collaboration among the various squads or teams developing independent microservices. Additionally, they are independent, single-purpose services and are deployed separately as well as regularly. Having seen the benefits of microservices in brief, note that the architecture also brings complicated challenges to cater to. As manifold services interact with each other through REST-based endpoints, performance degradation can sink a business. For instance, for an eCommerce app, shaving 100 ms off its shopping cart or product listings can directly influence the bottom line of order placement. Likewise, for an event-driven product with frequent customer interaction, even a delay of a few milliseconds can annoy clients and cause them to go elsewhere. Whatever the situation may be, reliability and performance are significant elements of software development, so businesses must invest the necessary time and effort into performance tests.
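One concrete way to guard such a latency budget in a performance test is to assert on a percentile of measured response times. The sketch below is a minimal, hypothetical example: a nearest-rank percentile, made-up latency samples, and an assumed 200 ms budget.

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[k]

# Hypothetical latencies (ms) collected from a load test of a cart endpoint.
latencies = [82, 95, 101, 88, 350, 91, 87, 99, 93, 90]
p95 = percentile(latencies, 95)
budget_ms = 200
passed = p95 <= budget_ms   # one slow outlier is enough to blow the p95 budget
```

Averages hide exactly the tail-latency outliers that drive customers away, which is why percentile checks are the usual choice here.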


Agility Broke AppSec. Now It's Going to Fix It.

AppSec teams are charged with making sure software is safe. Yet, as the industry's productivity multiplied, AppSec experienced shortages in resources to cover basics like penetration testing and threat modeling. The AppSec community developed useful methodologies and tools — but outnumbered 100 to 1 by developers, AppSec simply cannot cover it all. Software security is a highly complex process built upon layers of time-consuming, detail-oriented tasks. To move forward, AppSec must develop its own approach to organize, prioritize, measure, and scale its activity. Agile approaches and tools emerged from recognizing the limitations of longstanding approaches to software development. However, AppSec's differences mean it can't simply copy software development. For example, bringing automated testing into CI/CD can leave significant gaps. First, every asset delivered outside CI/CD will remain untested and require alternative AppSec processes, potentially leading to unmanaged risk and shadow assets. Second, when developers question the quality of a report, it creates friction between engineers and security, jeopardizing healthy cooperation.



Quote for the day:

“Make your team feel respected, empowered and genuinely excited about the company’s mission.” -- Tim Westergren

Daily Tech Digest - May 16, 2021

Scientist develops an image recognition algorithm that works 40% faster than analogs

Convolutional neural networks (CNNs), which include a sequence of convolutional layers, are widely used in computer vision. Each layer in a network has an input and an output. The digital description of the image goes to the input of the first layer and is converted into a different set of numbers at the output. The result goes to the input of the next layer and so on until the class label of the object in the image is predicted in the last layer. For example, this class can be a person, a cat, or a chair. For this, a CNN is trained on a set of images with a known class label. The greater the number and variability of the images of each class in the dataset, the more accurate the trained network will be. ... The study's author, Professor Andrey Savchenko of the HSE Campus in Nizhny Novgorod, was able to speed up the work of a pre-trained convolutional neural network with arbitrary architecture, consisting of 90-780 layers in his experiments. The result was an increase in recognition speed of up to 40%, while controlling the loss in accuracy to no more than 0.5-1%. The scientist relied on statistical methods such as sequential analysis and multiple comparisons.
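The layer-by-layer flow described above can be sketched in plain Python. This toy model is an illustration only, not Professor Savchenko's method: it uses a 1-D signal in place of an image, tiny random untrained kernels, and an argmax over three hypothetical classes.

```python
import random

random.seed(0)

def conv_layer(x, kernel):
    """'Valid' convolution over a 1-D signal, followed by ReLU."""
    n = len(kernel)
    out = [sum(x[i + j] * kernel[j] for j in range(n)) for i in range(len(x) - n + 1)]
    return [max(v, 0.0) for v in out]

def classify(x, kernels, class_weights):
    """Pass the input through each layer in turn; the last layer scores
    the classes, and the argmax is the predicted label."""
    for k in kernels:
        x = conv_layer(x, k)
    scores = [sum(w * v for w, v in zip(row, x)) for row in class_weights]
    return scores.index(max(scores))

x = [random.gauss(0, 1) for _ in range(32)]                    # toy 1-D "image"
kernels = [[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]   # 3 conv layers
class_weights = [[random.gauss(0, 1) for _ in range(26)] for _ in range(3)]
label = classify(x, kernels, class_weights)                    # 0=person, 1=cat, 2=chair
```

Each size-3 kernel shrinks the signal by two elements (32 → 30 → 28 → 26), which is why the final class-scoring layer expects 26 inputs; training would adjust the kernels and weights against labeled examples.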


Comprehensive Guide To Dimensionality Reduction For Data Scientists

The approaches for Dimensionality Reduction can be roughly classified into two categories. The first one is to discard less-variance features. The second one is to transform all the features into a few high-variance features. We will have a few of the original features in the former approach that do not undergo any alterations. But in the latter approach, we will not have any of the original features, rather, we will have a few mathematically transformed features. The former approach is straightforward. It measures the variance in each feature. It claims that a feature with minimal variance may not have any pattern in it. Therefore, it discards the features in the order of their variance from the lowest to the highest. Backward Feature Elimination, Forward Feature Construction, Low Variance Filter and Lasso Regression are the popular techniques that fall under this category. The latter approach claims that even a less-important feature may have a small piece of valuable information. It does not agree with discarding features based on variance analysis.
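The former, variance-based approach (the Low Variance Filter) can be shown with a minimal sketch; the threshold and toy dataset below are arbitrary:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def low_variance_filter(rows, threshold):
    """Keep only the feature columns whose variance exceeds the threshold."""
    cols = list(zip(*rows))
    keep = [i for i, col in enumerate(cols) if variance(col) > threshold]
    return keep, [[row[i] for i in keep] for row in rows]

# Toy dataset: the middle feature is nearly constant, so it carries little pattern.
data = [
    [1.0, 5.0, 10.0],
    [2.0, 5.0, 20.0],
    [3.0, 5.1, 30.0],
    [4.0, 5.0, 40.0],
]
kept, reduced = low_variance_filter(data, threshold=0.5)
```

The surviving columns are the original features, untouched, which is exactly the property that distinguishes this family from the transformation-based one.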


How security reskilling and automation can mend the cybersecurity skills gap

To understand the high demand for cybersecurity skills, consider how much has changed in IT—especially in the last year. From a rapid increase in cloud migrations to a huge shift toward remote work, IT teams everywhere have been forced to adapt quickly to keep up with the changing needs of their organizations. However, the rapid expansion of technology and explosion of remote work has kept IT busy enough. They don’t have the capacity to adequately handle responsibilities ranging from regular security hygiene to the patching and forensics surrounding the latest zero-day threat. ... With the difficulty of recruiting, hiring, and onboarding new cybersecurity experts from a small talent pool, consider investing in retraining your workforce to organically grow needed cybersecurity skills. Besides avoiding a lengthy headhunting process, this also makes clear economic sense. According to the Harvard Business Review, it can cost six times as much to hire from the outside rather than build talent from within. In addition, focusing on retraining opens up career progression for your best employees—building their skills, morale, and loyalty to your organization.


The Flow System: Leadership for Solving Complex Problems

One of the most significant limitations in today’s leadership practices is the lack of development. Most leadership training is disguised as leader education. These training efforts also do not include time for emerging leaders to practice their newly learned leadership skills. Without practice and the freedom to fail during the developmental stages, it is nearly impossible for emerging leadership to master these skills. Another problem with leadership development is that most programs deliver training to everyone the same way. Most leadership development programs were initially designed as “one-size-fits-all” training. In The Flow System, we make great efforts to design leadership and team development around the contextual setting. We view leadership as a collective construct, not an individual construct. We incorporate the team as the model of leadership, and individual team members as leaders using a shared leadership model. This collective becomes the organization’s leadership model, from the lower ranks up to the executive level.


5 Practices To Give Great Code Review Feedback

The first thing to do is to have a very clear context about the PR. Sometimes we want to go fast; we think we already know what our colleague wanted to do, the best way to do it, and we just skim through the description. However, it is much better to take some time and read the title and description of the PR carefully, especially the latter because we could find all the assumptions that guided our colleague. We could find a more detailed description of the task and perhaps a good description of the main issue they faced when developing it. This could give us all the information we need to perform a constructive review, taking into consideration all the relevant aspects of it. ... When reviewing a piece of code, focus on the most important parts: the logic, the choices of data structure and algorithms, whether all the edge cases have been covered in the tests, etc. Many of the other syntax/formatting elements should be taken care of by a tool, such as a linter, a formatter, a spell checker, etc. There is no point in highlighting them in a comment. The same idea holds for how the documentation is written. There should be some conventions, and it is OK to tell the contributor if they are not following them.


Machine learning does not magically solve your problems

Looking at the neural network approach we see that some of the manual tasks are absorbed into the neural network. Specifically, feature engineering and selection are done internally by the neural network. On the flipside, we have to determine the network architecture (number of layers, interconnectedness, loss function, etc) and tune the hyperparameters of the network. In addition, many other tasks such as assessing the business problem still need to be done. As with TSfresh/Lasso, the neural network is an approach that works well in a specific situation, and is not a quick nor automated procedure. A good way to frame the change from regression to a neural network is that instead of solving the problem manually, we build a machine that solves the problem for us. Adding this layer of abstraction allows us to solve problems we never thought we could solve, but that machine still takes a lot of time and money to create. ... Machine learning has some magical and awe-inspiring applications, extending the range of applications we thought possible to be solved using a computer. However, the awesome potential of machine learning does not mean that it automatically solves our challenges.


Five Tips For Creating Design Systems

A product experience that delights is usually designed with persistent visuals and consistent interaction. Users want to feel comfortable knowing that no matter where they navigate, they won’t be surprised by what they find. Repetition, in the case of product design, is not boring, but welcome. Design systems create trust with users. Another benefit is the increased build velocity from design and engineering teams. As designers, we are tasked with solving problems. We want to create a simple understanding of how our users can accomplish tasks in a workflow. Of course, we are tempted at times to invent new patterns to solve design problems. We often forget, in the minutia of design iterations, that we’ve already solved a particular problem in a prior project or in another part of the current product. This inefficiency can lead to wasted time, especially if those existing patterns and components have not been documented. In a single-person design team, the negative effects may not be as visible, but one can imagine the exponential nature of a larger design team consistently duplicating existing work or creating new patterns that, ultimately, create an inconsistent user experience.


A Gentle Introduction to Multiple-Model Machine Learning

Typically, a single output value is predicted. Nevertheless, there are regression problems where multiple numeric values must be predicted for each input example. These problems are referred to as multiple-output regression problems. Models can be developed to predict all target values at once, although a multi-output regression problem is another example of a problem that can be naturally divided into subproblems. Like binary classification in the previous section, most techniques for regression predictive modeling were designed to predict a single value. Predicting multiple values can pose a problem and requires the modification of the technique. Some techniques cannot be reasonably modified for multiple values. One approach is to develop a separate regression model to predict each target value in a multi-output regression problem. Typically, the same algorithm type is used for each model. For example, a multi-output regression with three target values would involve fitting three models, one for each target.
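A minimal sketch of the per-target strategy, assuming a single input feature and ordinary least squares for each target column (all data below is made up):

```python
def fit_simple(xs, ys):
    """Ordinary least squares for y = a*x + b with one input feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def fit_multioutput(xs, targets):
    """One separate model per target column, as in the per-target strategy."""
    return [fit_simple(xs, col) for col in zip(*targets)]

def predict(models, x):
    return [a * x + b for a, b in models]

xs = [0.0, 1.0, 2.0, 3.0]
# Three target values per example (hypothetical multi-output data).
targets = [[0.0, 1.0, 5.0], [2.0, 3.0, 4.0], [4.0, 5.0, 3.0], [6.0, 7.0, 2.0]]
models = fit_multioutput(xs, targets)   # three fitted models, one per target
preds = predict(models, 4.0)
```

The trade-off of this divide-and-conquer approach is that each model is fitted independently, so any correlation between the target values is ignored.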


Why is Business Intelligence (BI) important?

The term “data-driven decision-making” doesn’t fully encapsulate one of its important subtexts: People almost always mean fast decisions. This distinction matters because it’s one of the capabilities that modern BI tools and practices enable: Decision-making that keeps pace (or close enough to it) with the speed at which data is produced. “Data is now produced so fast and in such large volumes that it is impossible to analyze and use effectively when using traditional, manual methods such as spreadsheets, which are prone to human error,” says Darren Turner, head of BI at Air IT. “The advantage of BI is that it automatically analyzes data from various sources, all accurately presented in one easy-to-digest dashboard.” Sure, everyone talks about the importance of speed and agility across technology and business contexts. But that’s kind of the point: If you’re not doing it, your competitors almost certainly are. ... “In a marketplace where the volume of data is ever-increasing, the ability for it to be processed and translated into sound business decisions is essential for better understanding customer behavior and outperforming competitors.”


What Is NFT (Non Fungible Tokens)? What Does NFT Stand For?

The bulk of NFTs are stored on the Ethereum network. Ethereum, like Bitcoin and Dogecoin, is a cryptocurrency, but its blockchain also supports non-fungible tokens (NFTs): individual tokens on the Ethereum network that store additional information, which enables them to function differently. That extra content is the most important feature, as it allows them to be displayed as art, music, video (and so on) in JPGs, MP3s, photographs, GIFs, and other formats. They can be bought and sold like any other medium of art because they have value – and their value is largely dictated by supply and demand, much like physical art. But that doesn’t suggest, in any way, that there is just one digital version of an NFT artwork available to purchase. One can obviously replicate them, much like art prints of originals are used, bought and sold, but they won’t be of the same value as the original one.



Quote for the day:

"It is not fair to ask of others what you are not willing to do yourself." -- Eleanor Roosevelt

Daily Tech Digest - May 15, 2021

Hybrid multiclouds promise easier upgrades, but threaten data risk

One gap is the lack of ongoing training and recertification. Such training helps to reduce the number and severity of hybrid cloud misconfigurations. Misconfiguration is the leading cause of hybrid cloud breaches today, so it’s surprising more CIOs aren’t defending against it by paying for their teams to all get certified. Each public cloud platform provider has a thriving sub-industry of partners that automate configuration options and audits. Many can catch incorrect configurations by constantly scanning hybrid cloud configurations for errors and inconsistencies. Automating configuration checking is a start, but a CIO needs a team to keep these optimized scanning and audit tools current while overseeing them for accuracy. Automated checkers aren’t strong at validating unprotected endpoints, for example. Automation efforts often overlook key factors. It is necessary to address inconsistent, often incomplete controls and monitoring across legacy IT systems. That is accompanied by inconsistency in monitoring and securing public, private, and community cloud platforms. Lack of clarity on who owns what part of a multicloud configuration continues because IT and the line of business debate who will pay for it.


Cybersecurity Oversight and Defense — A Board and Management Imperative

Although it is common to have the cyber risk oversight function fall to the audit committee, this should be carefully considered given the burden on audit committees. An alternative to consider, depending on the magnitude of the oversight responsibility, is the formation of a dedicated, cyber-specific board-level committee or sub-committee. At the same time, because cybersecurity considerations increasingly affect all operational decisions, they should be a recurring agenda item for full board meetings. Companies that already have standalone risk or technology committees should also consider where and how to situate cybersecurity oversight. The appointment of directors with experience in technology should be evaluated alongside board tutorials and ongoing director education on these matters. Robust management-level systems and reporting structures support effective board-level oversight, and enterprise-wide cybersecurity programs should be re-assessed periodically, including to ensure they flow through to individual business units and legacy assets as well as newly acquired or developed businesses.


Linux and open-source communities rise to Biden's cybersecurity challenge

This is not just a problem, of course, with open-source software. With open-source software, you can actually see the code, so it's easier to make an SBOM. Proprietary programs, like the recent, massively exploited Microsoft Exchange disaster, are black boxes. There's no way to really know what's in Apple or Microsoft software. Indeed, the biggest supply-chain security disaster so far, SolarWinds' catastrophic failure to secure its software supply chain, was because of proprietary software supply chain failures. Besides SPDX, the Linux Foundation recently announced a new open-source software signing service: The sigstore project. Sigstore seeks to improve software supply chain security by enabling the easy adoption of cryptographic software signing backed by transparency log technologies. Developers are empowered to securely sign software artifacts such as release files, container images, and binaries. These signing records are then kept in a tamper-proof public log. This service will be free for all developers and software providers to use. The sigstore code and operation tooling that will make this work are still being developed.
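Why does a tamper-proof public log matter? A much-simplified hash-chained log in Python can illustrate the principle. This is not sigstore's actual design (which involves real cryptographic signatures and Merkle-tree transparency logs), just a sketch of why tampering with an old signing record becomes detectable:

```python
import hashlib

def record(log, artifact_bytes):
    """Append a signing record whose hash chains to the previous entry,
    so any later edit to an old record breaks the chain."""
    prev = log[-1]["entry_hash"] if log else "0" * 64
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    entry_hash = hashlib.sha256((prev + digest).encode()).hexdigest()
    log.append({"artifact_sha256": digest, "prev": prev, "entry_hash": entry_hash})

def verify(log):
    """Recompute the chain from the start; any mismatch means tampering."""
    prev = "0" * 64
    for e in log:
        expected = hashlib.sha256((prev + e["artifact_sha256"]).encode()).hexdigest()
        if e["prev"] != prev or e["entry_hash"] != expected:
            return False
        prev = e["entry_hash"]
    return True

log = []
record(log, b"release-v1.0.tar.gz contents")
record(log, b"container image manifest")
ok_before = verify(log)
log[0]["artifact_sha256"] = "f" * 64        # tamper with an old record
ok_after = verify(log)
```

An attacker who swaps an artifact digest after the fact would have to rewrite every subsequent entry, which a public, append-only log prevents.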


DevOps didn’t kill WAF, because WAF will never truly die

WAFs are specific to each application and, therefore, require different protections. The filtering, monitoring, and policy enforcement (such as blocking malicious traffic) provide valuable protections but carry cost implications and consume computing resources. In a DevOps-fed cloud environment, it’s challenging to keep WAFs current with the constant flow of updates and changes. Introducing security into the CI/CD pipeline can solve that problem, but only for those apps being developed that way. It’s impossible to build security sprints into old third-party apps or applications deployed by different departments. The mere existence of those apps presents risk to the enterprise. They still need to be secured, and WAFs are likely still the best option. It’s also important to remember that no approach to cybersecurity will be perfect and that an agile DevOps methodology won’t be enough on its own. Even in an environment believed to be devoid of outdated or third-party apps, you can never be sure what other groups are doing or deploying—shadow IT is a persistent problem for enterprises.
 

Top 10 Latest Research On Brain Machine Interface

Brain-machine interfacing is a field of study that captures this neural process to control external software and hardware. Though the technology is at its primary stages, these are the current possibilities of the Brain-Machine Interface. Brain-controlled wheelchair: A technique to ease the life of disabled people. With concentration, users will be able to navigate the wheelchair through familiar indoor environments. Brain-controlled Robotic Arm: A brainwave sensor is used to capture brain signals every time the user blinks, concentrates, or meditates. The robotic arm is moved with an EEG sensor based on the brain data collected. Brain Keyboard: Oftentimes, paralyzed people fail to communicate with the surrounding environment. But that can be solved with a Brain Keyboard. EEG sensors will read the eye blinks and the system will translate them into text on the display. Brain-controlled Helicopter: Can you imagine flying a helicopter with your brain? It’s possible. The helicopter can fly according to the pilot’s concentration and meditation, which will navigate the helicopter up and down. Brain-controlled password authentication: EEG can be applied in biometric identification as brain signals and patterns are unique for every individual.


Are business leaders scared by the public cloud?

Security and compliance are the two biggest barriers to adoption. However, for the majority of business leaders, the cloud is more secure and easier to maintain compliance in than on-premises. Only a tiny minority of decision-makers find that the public cloud is less capable in terms of both security and data compliance than on-premises. Although superior in terms of capability, switching to cloud-native security and compliance models is a struggle for some enterprises. However, almost everyone is planning on growing their cloud program… despite the concerns some have expressed about vendor lock-in. The vast majority of enterprises will continue on with their cloud journey, although around a third are predicted to go full-steam ahead, migrating “as quickly as is feasible”. This is by no means the case for all enterprises, though. Around half wish to migrate more cautiously. Vendor lock-in appears to also be a major issue for most enterprises. The majority of enterprises express that they are significantly concerned by the consequences of putting all their eggs in one cloud provider’s basket. Only a fearless few do not see this as a concern, and this is the way to go.


Why data and machine intelligence will become the new normal in insurance

In the next 3-5 years, the digital insurance consumer will likely remain the millennials, with higher levels of income and education. It is important though to not assume homogeneity and develop solutions based on lazily assessed group characteristics. Personalisation is more important now than it ever has been. Beyond functionality and ease of access, emotions and personal growth are key drivers in consumption behaviour and like in any other group, there are a diverse set of expectations and desires amongst this group. Tailoring services and online buying journeys to the individual rather than the group is paramount; in the same way that offering life insurance immediately following a bereavement could be viewed as inappropriate, so too could an offer of social insurance be offensive to a staunch individualist. Certain benefits, although appealing on the surface to members of 'the group', may not work at a more nuanced level – a donation with every policy bought to an environmental charity will not appeal to every millennial.


Paying a Ransom: Does It Really Encourage More Attacks?

Although Phil Reitinger, a former director of the National Cyber Security Center within the Department of Homeland Security, doesn’t expect the pipeline company's apparent ransom payment to serve as a catalyst for other ransomware gangs, he acknowledges the impact the attack had on pipeline operations could encourage those interested in causing similar mayhem. "I don't see paying this particular ransom as that different from others, in the sense of opening up critical infrastructure as a target," he says. "Indeed, I expect there to be a reduction in criminal attacks on critical infrastructure as this ransomware gang now has a big target on its back," says Reitinger, who's now president and CEO of the Global Cyber Alliance. "However, the effectiveness of the attack may well increase the incentive for other actors who want to disrupt rather than cash a check." The ransomware-as-a-service gang behind DarkSide announced Thursday it was shutting down its operation after losing access to part of its infrastructure. A ransomware attack by a nation-state or highly competent gang, such as DarkSide, is almost impossible to stop, Maor says. But he points out that such attacks aren't easy to pull off.


Using Data as Currency: Your Company’s Next Big Advantage

Today’s world is increasingly data-driven, and companies are amassing unique data assets that have numerous and valuable implications for analytics, modeling, insights, personalization and targeting purposes. Most companies don’t know how to turn their mountains of data into real value for their business or their customers. But the companies that do are rewarded with market valuations that far exceed their peers. Amazon, Nike, Progressive, Hitachi, and others recognize that winning in a digitally driven world is about using data as currency, and the CIO and CTO are key to making that happen. But what does “data as currency” mean? For a while now, we have heard a number of leaders claim, “data is the new oil”. ... Data’s flexibility arguably gives data even more value than oil and other currencies, assuming companies can leverage it properly. For instance, many product companies sit on customer interaction data that could better predict demand to optimize their manufacturing output and supply chains. Internal data on employee job assignments, self-driven trainings, and micro-experiences could optimize talent versus upcoming opportunities.


Implementing Microservicilities with Quarkus and MicroProfile

In a microservice architecture, we should develop with failure in mind, especially when communicating with other services. In a monolith application, the application, as a whole, is up or down. But when this application is broken down into a microservice architecture, the application is composed of several services and all of them are interconnected by the network, which implies that some parts of the application might be running while others may fail. It is important to contain the failure to avoid propagating the error through the other services. Resiliency (or application resiliency) is the ability for an application/service to react to problems and still provide the best possible result. ... Elasticity (or scaling) is something that Kubernetes has had in mind since the very beginning. For example, running the kubectl scale deployment myservice --replicas=5 command scales the myservice deployment to five replicas or instances. The Kubernetes platform takes care of finding the proper nodes, deploying the service, and maintaining the desired number of replicas up and running at all times.
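MicroProfile Fault Tolerance expresses this kind of failure containment with annotations such as @Retry and @Fallback; the underlying pattern can be sketched language-neutrally in Python (the flaky downstream service below is simulated):

```python
def with_resilience(call, retries=3, fallback=None):
    """Retry a flaky downstream call a few times, then contain the
    failure with a fallback result instead of propagating the error."""
    last_error = None
    for _ in range(retries):
        try:
            return call()
        except Exception as exc:      # in practice, retry only transient errors
            last_error = exc
    if fallback is not None:
        return fallback()
    raise last_error

# Simulated downstream service that fails twice, then succeeds.
attempts = {"n": 0}
def flaky_service():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("downstream unavailable")
    return "live data"

result = with_resilience(flaky_service, retries=3, fallback=lambda: "cached data")
```

The caller never sees the two transient failures; if all retries had failed, the stale-but-usable fallback ("cached data") would have kept the error from cascading to other services.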



Quote for the day:

"To get a feel for the true essence of leadership, assume everyone who works for you is a volunteer." -- Kouzes and Posner

Daily Tech Digest - May 14, 2021

Thoughts on Cloud Security

With good security professionals in high demand, companies are better off investing in their security professionals who show an interest in the cloud, in order to take their security organization to the next level. Solid training and support will enable them to better collaborate with development teams and significantly raise the security bar of their cloud environment. There are plenty of free resources available today, such as cloud security standards and open source solutions, that can be leveraged. The Center for Internet Security (CIS) controls and/or AWS’ Well-Architected Framework are great resources to help get started. As a reformed cloud security professional, I can say that embracing the cloud takes a shift in mindset. In general, security teams need to stop saying “no” and getting in the way of innovation. Instead, they need to be able to provide development teams the access they need — when they need it, and put guardrails in place to ensure security. To be successful, it is key to do this in a way that does not have a significant impact on the development experience.


85% of Data Breaches Involve Human Interaction: Verizon DBIR

"Credentials are the skeleton key," Bassett says. Most know stolen credentials are a problem, but what they may not think about is how they spread across attack patterns and enable the start of many different types of data breaches, from phishing campaigns, to stealing the contents of a target mailbox, to a ransomware campaign in which an attacker encrypts then steals data. The trend toward simplicity is evident in the continued increase of business email compromise (BEC), which followed phishing as the second most common form of social engineering, reflecting a 15x spike in "misrepresentation," a type of integrity breach. BEC doubled last year and again this year. Of the 58% of BEC attacks that successfully stole money, the median loss was $30,000, with 95% of BECs costing between $250 and $984,855, researchers learned. Of the breaches analyzed, 85% had a human element. This is a broad term that encompasses any attack that involves a social action: phishing, BEC, lost or stolen credentials, using insecure credentials, human error, misuse, and even malware that has to be clicked then downloaded.


Hybrid working: creating a sustainable model

The evolution of thinking around the workplace we’ve seen in such a short space of time is quite something. Over the course of the last year, business mindsets have shifted from complete allegiance to the physical office, to fully embracing remote working to survive, to a realisation that a hybrid working model may well be the best way for businesses to thrive. Now, as we begin to move out of the pandemic, IT and business leaders should be considering what their workplace strategy looks like in the long term. What can we learn from the last 12 months? What are the tools, technologies and processes we should keep in place? How do we facilitate a reimagined office space? How do we empower employees to be productive and happy wherever they are? There’s no doubt that hybrid working opens up huge opportunity for businesses, from creating a flexible working environment that appeals to a broad range of talent to enabling more efficient ways of working and a healthier work-life balance. But how do we create a hybrid model that is sustainable in the long term?


The Global Artificial Intelligence Race and Strategic Balance

Countries are under pressure to protect their citizens and even political stability in the face of possible malicious/biased uses of AI and Big Data. Because 5G networks are the future backbone of our increasingly digitised economies and societies, ensuring its security and resilience is essential. Even at current capability levels, AI can be used in the cyber domain to augment attacks on cyberinfrastructure. There is no such thing as perfect security, only varying levels of insecurity. These ‘smart’ technologies rely on bidirectional wireless links to communicate with devices and global services, which gives a larger ‘attack surface’ that cyber threats target. Thus, 5G networks may lead to politically divided and potentially noninteroperable technology spheres of influence, where one sphere would be led by the US and another by China, with some others in between (for example the EU, South Korea and Japan). All of these concerns are most significant in the context of authoritarian states but may also undermine the ability of democracies to sustain truthful public debates. For example, ‘deepfake’ algorithms can create fake images and videos that cannot easily be distinguished from authentic ones by humans. It is threatening to global security if deepfake methods are employed to promulgate misinformation.


5 developer tools for detecting and fixing security vulnerabilities

Dependabot - now a native GitHub solution - has a simple, straightforward workflow: automatically open Pull Requests for new dependency versions, and alert on vulnerable dependencies. Dependabot will also clearly differentiate between security-related PRs and normal dependency upgrades by tagging [Security] in the title and label, along with including a changelog of the vulnerabilities fixed. ... Similar to Dependabot, Renovate is a GitHub or CLI app that monitors your dependencies and opens Pull Requests when new versions are available. While it supports fewer languages than Dependabot, the main advantage of Renovate is that it's extremely configurable. Ever wished you could write "schedule": "on the first day of the week" in your configs? Well, Renovate allows you to do that! It also provides fine-grained control of auto-merging dependencies based on rules set in the config. ... Snyk is a new one for me, but I really like that it's a product built with developers in mind, regardless of their previous experience with security. While Snyk is a paid product for business+, its free tier covers open-source, personal projects, and small teams, making it a great resource for personal projects and learning, even if you don't have the opportunity to use it on the job!
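As a rough sketch of the configurability described above, a minimal renovate.json combining a schedule with an auto-merge rule might look something like this (the schedule text and rule values are illustrative; consult Renovate's configuration docs for the exact syntax your version accepts):

```json
{
  "extends": ["config:base"],
  "schedule": ["on the first day of the week"],
  "packageRules": [
    {
      "matchDepTypes": ["devDependencies"],
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```

Here only minor and patch updates to dev dependencies would auto-merge; anything else still arrives as a normal Pull Request for review.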


Adding Security to Testing to Enable Continuous Security Testing

Security testing is a variant of software testing which ensures that the systems and applications in an organization are free from any loopholes that may cause a big loss, Thalayasingam said. Security testing of any system is about finding all possible loopholes and weaknesses of the system which might result in a loss of information at the hands of employees or outsiders of the organization. To kick off security testing, security experts should train quality engineers about security and how to do manual security testing. Next, quality engineers can work with security experts to narrow down the tests for security testing and add value to existing test cases. This will lead to executing the security tests in sprint-level activities, automating them, and making them part of continuous integration. Quality engineers should add security checks to their test process for each story, Thalayasingam suggested. This would help find obvious security vulnerabilities at a very early stage. The right guidance and training will help quality engineers to gain the security testing mindset.
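As one hypothetical example of the kind of lightweight, automatable security check a quality engineer could fold into sprint-level tests, the sketch below verifies that an HTTP response carries a few commonly recommended security headers. The helper name and the set of required headers are assumptions for illustration, not part of the talk:

```python
# Sketch: flag responses that are missing commonly recommended security
# headers. The required set here is illustrative; teams would tailor it.
REQUIRED_HEADERS = {
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
}

def missing_security_headers(response_headers):
    """Return the set of required security headers absent from a response."""
    present = set(response_headers)
    return REQUIRED_HEADERS - present

# Example: a response without HSTS would fail this check.
headers = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
}
print(missing_security_headers(headers))
```

A check like this can run against every story's endpoints in CI, surfacing obvious gaps long before a dedicated security review.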


Building AI Leadership Brain Trust Is A Business Imperative: Are You Ready?

There are sufficient markers painting this stark prediction if one chooses to dig deeper. Did you know that over half of technology executives in the 2019 Gartner CIO Survey said they intended to employ AI before the end of 2020, up from 14% at the time? Board directors and CEOs have to accelerate their investments in AI and ensure they are managing the journey wisely, with the right AI leadership skills and machine learning toolkits in place to advance AI sustainably and modernize the business. In a recent report by NewVantage Partners, 75% of companies cited fear of disruption from data-driven digital competitors as the top reason they’re investing. There are many questions that board directors and CEOs must ask in the face of any large investment consideration, and AI is not inexpensive. On average an AI project can range from as low as $30K to over $1 million for an MVP, depending on the complexity of the data set and the use case being solved to build a baseline AI model that predicts an accurate outcome.


Maximizing a hybrid cloud approach with colocation

Companies are increasingly deploying a hybrid cloud approach to balance the benefits and challenges presented by both the public and private cloud. With the hybrid cloud, both types of cloud environments are integrated, allowing data to move seamlessly between platforms. This hybrid architecture can be designed as a bifurcated system in which the private cloud hosts a company’s sensitive data and mission-critical components, and the public cloud hosts the rest. With this type of architecture, the data and applications live permanently in their assigned cloud environment, but the two systems are able to communicate seamlessly. Another option – the cloud bursting model – houses all of a company’s information in the private cloud, but when spikes in demand occur, the public cloud provides supplementary capacity. Both hybrid approaches give companies greater control over and access to their IT environments and the ability to implement more stringent security protocols on the private cloud portion of their deployment. In addition, a hybrid approach gives organizations flexibility to build a solution that meets their current needs, but that can also evolve as their needs change.
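The cloud bursting model described above boils down to a simple routing decision: keep work on the private cloud until it is saturated, then overflow to the public cloud. A minimal sketch, with an entirely hypothetical capacity figure and workload units:

```python
# Illustrative cloud-bursting routing decision: demand is served privately
# up to a fixed capacity, and any excess "bursts" to the public cloud.
PRIVATE_CAPACITY = 100  # hypothetical units of work the private cloud absorbs

def route_workload(demand):
    """Split demand between the private cloud and public-cloud burst capacity."""
    private = min(demand, PRIVATE_CAPACITY)
    burst = max(0, demand - PRIVATE_CAPACITY)
    return {"private": private, "public_burst": burst}

print(route_workload(80))   # normal load: fits entirely in the private cloud
print(route_workload(140))  # demand spike: the excess bursts to the public cloud
```

Real implementations hinge on how "capacity" and "demand" are measured and on the latency of spinning up public resources, but the control flow is essentially this threshold check.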


Fake Android, iOS apps promise lucrative investments while stealing your money

The operators have created dedicated websites linked to each individual app, tailored to appear as the impersonated organizations in an effort to improve the apparent legitimacy of the software -- and the likelihood of a scam being successful. Sophos' investigation into the apps began with a report of a single malicious app masquerading as a trading company based in Asia, Goldenway Group. The victim, in this case, was targeted through social media and a dating website and lured to download the fake app. Rather than relying on mass spam emails or phishing, attackers may now also take a more personal approach and try to forge a relationship with their victim, such as by pretending to be a friend or a potential love match. Once trust is established, they will then offer some form of time-sensitive financial opportunity and may also promise guaranteed returns and excellent profits. However, once a victim downloads a malicious app or visits a fake website and provides their details, they are lured into opening an account or cryptocurrency wallet and transferring funds. 


When AI Becomes the Hacker

The core question Schneier asks is this: What if artificial intelligence systems could hack social, economic, and political systems at the computer scale, speed, and range such that humans couldn't detect it in time and suffered the consequences? It's where AIs evolve into "the creative process of finding hacks." "They're already doing that in software, finding vulnerabilities in computer code. They're not that good at it, but eventually they will get better [while] humans stay the same" in their vulnerability discovery capabilities, he says. In less than a decade from now, Schneier predicts, AIs will be able to "beat" humans in capture-the-flag hacking contests, pointing to the DEFCON contest in 2016 when an AI-only team called Mayhem came in dead last against all-human teams. That's because AI technology will evolve and surpass human capability. Schneier says it's not so much AIs "breaking into" systems, but AIs creating their own solutions. "AI comes up with a hack and a vulnerability, and then humans look at it and say, 'That's good,'" and use it as a way to make money, like with hedge funds in the financial sector, he says.



Quote for the day:

"Effective team leaders realize they neither know all the answers, nor can they succeed without the other members of the team." -- Katzenbach & Smith