Daily Tech Digest - May 20, 2021

A new era of DevOps, powered by machine learning

While programming languages have evolved tremendously, at their core they all still have one major thing in common: having a computer accomplish a goal in the most efficient and error-free way possible. Modern languages have made development easier in many ways, but not a lot has changed in how we actually inspect the individual lines of code to make them error-free. And even less has been done when it comes to improving code quality in ways that improve performance and reduce operational cost. Where build and release schedules once slowed down the time it took developers to ship new features, the cloud has turbocharged this process by providing a step-function increase in the speed to build, test, and deploy code. New features are now delivered in hours (instead of months or years) and are in the hands of end users as soon as they are ready. Much of this is made possible through a new paradigm in how IT and software development teams collaboratively interact and build best practices: DevOps. Although DevOps technology has evolved dramatically over the last 5 years, it is still challenging.


Productizing Machine Learning Models

Typically, there are three distinct but interconnected steps towards productizing an existing model: serving the models; writing the application’s business logic and serving it behind an API; and building the user interface that interacts with the above APIs. Today, the first two steps require a combination of DevOps and back-end engineering skills (e.g. “Dockerizing” code, running a Kubernetes cluster if needed, standing up web services…). The last step—building out an interface with which end users can actually interact—requires front-end engineering skills. The range of skills necessary means that feedback loops are almost impossible to establish and that it takes too much time to get machine learning into usable products. Our team experienced this pain first-hand as data scientists and engineers; so, we built BaseTen. ... Oftentimes, serving a model requires more than just calling it via an API. For instance, there may be pre- and/or post-processing steps, or business logic may need to be executed after the model is called. To do this, users can write Python code in BaseTen and it will be wrapped in an API and served—no need to worry about Kubernetes, Docker, and Flask.
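
To make the conventional route concrete, here is a minimal, hypothetical sketch of those first steps done by hand: a model wrapped behind a Flask API with pre- and post-processing around the call. The model file, feature schema, and business logic are all invented for illustration; this is the kind of plumbing a platform like BaseTen aims to absorb.

```python
# Hypothetical sketch of the "serve it yourself" route. The model name,
# input schema, and churn logic below are assumptions for illustration.
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("churn_model.joblib")  # hypothetical pre-trained model

def preprocess(payload):
    # Pre-processing step: order the raw fields into the feature vector
    # the model was trained on (assumed schema).
    return [[payload["age"], payload["tenure_months"], payload["monthly_spend"]]]

def postprocess(prediction):
    # Post-processing / business logic executed after the model is called.
    return {"churn_risk": "high" if prediction[0] == 1 else "low"}

@app.route("/predict", methods=["POST"])
def predict():
    features = preprocess(request.get_json())
    return jsonify(postprocess(model.predict(features)))

if __name__ == "__main__":
    app.run(port=8000)
```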


The timeline for quantum computing is getting shorter

Financial traders rely heavily on computer financial simulations for making buying and selling decisions. Specifically, “Monte Carlo” simulations are used to assess risk and simulate prices for a wide range of financial instruments. These simulations also can be used in corporate finance and for portfolio management. But in a digital world where other industries routinely leverage real-time data, financial traders are working with the digital equivalent of the Pony Express. That’s because Monte Carlo simulations involve such an insanely large number of complex calculations that they consume more time and computational resources than a 14-team, two-quarterback online fantasy football league with Superflex position. Consequently, financial calculations using Monte Carlo methods typically are made once a day. While that might be fine in the relatively tranquil bond market, traders trying to navigate more volatile markets are at a disadvantage because they must rely on old data. If only there were a way to accelerate Monte Carlo simulations for the benefit of our lamentably laden financial traders!
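
For readers who want to see why these simulations are so expensive, below is a minimal, illustrative sketch of Monte Carlo pricing for a single European call option; every parameter is made up, and real risk runs repeat this kind of computation across thousands of instruments and scenarios.

```python
# Minimal sketch of Monte Carlo option pricing: simulate many terminal
# prices under geometric Brownian motion and average the payoffs.
# All parameters are illustrative.
import numpy as np

def monte_carlo_call_price(s0, strike, rate, sigma, maturity, n_paths):
    z = np.random.standard_normal(n_paths)
    # Simulate terminal asset prices under risk-neutral GBM dynamics.
    st = s0 * np.exp((rate - 0.5 * sigma**2) * maturity
                     + sigma * np.sqrt(maturity) * z)
    payoff = np.maximum(st - strike, 0.0)
    # Discount the average payoff back to today.
    return np.exp(-rate * maturity) * payoff.mean()

# Accuracy improves only with the square root of the sample count,
# which is why daily batch runs need enormous numbers of paths.
print(monte_carlo_call_price(100, 105, 0.01, 0.2, 1.0, 1_000_000))
```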


Pandemic tech use heightens consumer privacy fears

With user data the lifeblood of online platforms and digital brands, Marx said there were clear lessons for tech companies to learn in the post-pandemic world. Looking ahead, many study respondents agreed they would prefer to engage with brands that made it easier for them to control their data, up from previous years. Others called out “creepy” behaviour such as personalised offers or adverts that stalk people around the internet based on their browsing habits, and many also felt they wanted to see more evidence of appropriate data governance. Those organisations that can successfully adapt to meet these expectations might find they have a competitive advantage in years to come, suggested Marx. And consumers already appear to be sending them a message that the issue needs to be taken seriously, with over a third of respondents now rejecting website cookies or unsubscribing from mailing lists, and just under a third switching on incognito web browsing. Notably, in South Korea, many respondents said that having multiple online personas for different services was a good way to manage their privacy, raising concerns about data accuracy and the quality of insights that can be derived from it.


Why great leaders always make time for their people

When people can’t find you, they aren’t getting the information they need to do their job well. They waste time just trying to get your time. They may worry that, when they do find you, because you’re so busy, you’ll be brittle or angry. The whole organization may even be working around the assumption that you have no bandwidth. The sad truth, however, is that when you are unavailable, it’s also you who is not getting the message. You’re not picking up vital information, feedback, and early warning signs. You’re not hearing the diverse perspectives and eccentric ideas that only manifest in unpredictable, uncontrolled, or unscheduled situations—so, exactly those times you don’t have time for. And you’re not participating in the relaxed, social interactions that build connection and cohesion in your organization. So, though you may be busy doing lots of important stuff, your finger is off the pulse. But imagine being a leader who does have time, and how this freeing up of resources changes a leader’s influence on everyone below them. Great leaders know that being available actually saves time. A leader who has time would not use “busy” as an excuse. Indeed, you would take responsibility for time.


The road to successful change is lined with trade-offs

Leaders should borrow an important concept from the project management world: Go slow to go fast. There is often a rush to dive in at the beginning of a project, to start getting things done quickly and to feel a sense of accomplishment. This desire backfires when stakeholders are overlooked, plans are not validated, and critical conversations are ignored. Instead, project managers are advised to go slow — to do the work needed up front to develop momentum and gain speed later in the project. The same idea helps reframe notions about how to lead organizational change successfully. Instead of doing the conceptual work quickly and alone, leaders must slow down the initial planning stages, resist the temptation and endorphin rush of being a “heroic” leader solving the problem, and engage people in frank conversations about the trade-offs involved in change. This does not have to take long — even just a few days or weeks. The key is to build the capacity to think together and to get underlying assumptions out in the open. Leaders must do more than just get the conversation started. They also need to keep it going, often in the face of significant challenges.


With smart canvas, Google looks to better connect Workspace apps

The smart chips also connect to Google Drive and Calendar for files and meetings, respectively. And while the focus of the smart canvas capabilities is currently around Workspace apps, Google said that it plans to open the APIs for third-party platforms to integrate, too. “Google didn’t reinvent Docs, Sheets and Slides: They made it easier to meet while using them — and to integrate other elements into the Smart Canvas,” said Wayne Kurtzman, a research director at IDC. “Google seemingly focused on creating a single pane of glass to make engaging over work easier - without reinventing the proverbial wheel.” The moves announced this week are part of Google’s drive to integrate its various apps more tightly; the company rebranded G Suite to Workspace last year. “The idea of documents, spreadsheets and presentations as separate applications increasingly feels like an archaic concept that makes much less sense in today’s cloud-based environment, and this complexity gets in the way of getting things done,” said Angela Ashenden, a principal analyst at CCS Insight.


Graph databases to map AI in massive exercise in meta-understanding

"It is one of the biggest trends that we're seeing today in AI," Den Hamer said. "Because of this growing pervasiveness of this fundamental role of graph, we see that this will lead to composite AI, which is about the notion that graphs provide a common ground for the culmination, or if you like the composition of notable existing and new AI techniques together, they'll go well beyond the current generation of fully data-driven machine learning." Roughly speaking, graph databases work by storing a thing in a node – say, a person or a company – and then describing its relationship to other nodes using an edge, to which a variety of parameters can be attached. ... Meanwhile, graph databases often come in handy for data scientists, data engineers and subject matter experts trying to quickly understand how the data is structured, using graph visualisation techniques to start "identifying the likely most relevant features and input variables that are needed for the prediction or the categorisation that they're working on," he added.


Data Sharing Is a Business Necessity to Accelerate Digital Business

Gartner predicts that by 2023, organizations that promote data sharing will outperform their peers on most business value metrics. Yet, at the same time, Gartner predicts that through 2022, less than 5% of data-sharing programs will correctly identify trusted data and locate trusted data sources. “There should be more collaborative data sharing unless there is a vetted reason not to, as not sharing data frequently can hamper business outcomes and be detrimental,” says Clougherty Jones. Many organizations inhibit access to data, preserve data silos and discourage data sharing. This undermines the efforts to maximize business and social value from data and analytics — at a time when COVID-19 is driving demand for data and analytics to unprecedented levels. The traditional “don’t share data unless” mindset should be replaced with “must share data unless.” By recasting data sharing as a business necessity, data and analytics leaders will have access to the right data at the right time, enabling more robust data and analytics strategies that deliver business benefit and achieve digital transformation.


How AI could steal your data by ‘lip-reading’ your keystrokes

Today’s CV systems can make incredibly robust inferences with very small amounts of data. For example, researchers have demonstrated the ability for computers to authenticate users with nothing but AI-based typing biometrics and psychologists have developed automated stress detection systems using keystroke analysis. Researchers are even training AI to mimic human typing so we can develop better tools to help us with spelling, grammar, and other communication techniques. The long and short of it is, we’re teaching AI systems to make inferences from our finger movements that most humans couldn’t. It’s not much of a stretch to imagine the existence of a system capable of analyzing finger movement and interpreting it as text in much the same way lip-readers convert mouth movement into words. We haven’t seen an AI product like this yet, but that doesn’t mean it’s not already out there. So what’s the worst that could happen? Not too long ago, before the internet was ubiquitous, “shoulder surfing” was among the biggest threats faced by people for whom computer security is a big deal. Basically, the easiest way to steal someone’s password is to watch them type it.
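
To make the idea of typing biometrics concrete, here is a hypothetical sketch of the simplest possible feature extraction from keystroke events: the timing gaps between key presses. The event data and feature set are invented; production systems use far richer signals.

```python
# Hypothetical sketch of keystroke-dynamics features: inter-key timing
# statistics extracted from (key, timestamp) events.
import statistics

def keystroke_features(events):
    """events: list of (key, timestamp_seconds) tuples from one session."""
    gaps = [t2 - t1 for (_, t1), (_, t2) in zip(events, events[1:])]
    return {
        "mean_gap": statistics.mean(gaps),
        "stdev_gap": statistics.stdev(gaps),
        "fastest_gap": min(gaps),
    }

# Two sessions by the same user tend to produce similar feature vectors,
# which is what makes both authentication and inference attacks possible.
session = [("h", 0.00), ("e", 0.11), ("l", 0.24), ("l", 0.33), ("o", 0.51)]
print(keystroke_features(session))
```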



Quote for the day:

"Distinguished leaders impress, inspire and invest in other leaders." -- Anyaele Sam Chiyson

Daily Tech Digest - May 19, 2021

A New Brain Implant Turns Thoughts Into Text With 90 Percent Accuracy

When asked to copy a given sentence, T5 was able to “mindtext” at about 90 characters per minute (roughly 45 words by one estimate), “the highest typing rate that has yet been reported for any type of BCI,” the team wrote, and a twofold improvement over previous setups. His freestyle typing—answering questions—overall matched in performance, and met the average speed of thumb texting of his age group. “Willett and co-workers’ study begins to deliver on the promise of BCI technologies,” said Rajeswaran and Orsborn, not just for mind-texting, but also what comes next. The idea of tapping into machine learning algorithms is smart, yes, because the field is rapidly improving—and illustrating another solid link between neuroscience and AI. But perhaps more importantly, an algorithm’s performance relies on good data. Here, the team found that the time difference between writing letters, something rather complex, is what made the algorithm perform so well. In other words, for future BCIs, “it might be advantageous to decode complex behaviors rather than simple ones, particularly for classification tasks.”


The question of QR code security

These codes can invoke various actions on a smartphone device. Here lies the threat. While a QR code may appear as though it is designed to help us sign in to a Wi-Fi network or be part of an innocent marketing campaign, the intent of it may be entirely different, with threat actors architecting and deploying malicious codes in a variety of ways. They can be used to direct the user to a malicious URL for the purpose of phishing; force a call, thereby exposing the end user’s phone number to a scammer or a potentially expensive call centre; send a payment within seconds; obtain a user’s location; or draft an email or text and populate the recipient and subject lines. Additionally, they may introduce a compromised network on a device’s preferred network list and include a credential that enables the device to automatically connect to that network. Once connected, an attacker could launch further ‘Man-in-the-Middle’ attacks. Given the variety and seriousness of these potential threats, some key statistics released by MobileIron in September 2020 provide cause for alarm. ... At the same time, however, 71% of respondents said they could not “distinguish between a legitimate and malicious QR code”.
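
As an illustration of how easily such payloads are produced, the sketch below encodes both a Wi-Fi join string and a URL using the third-party qrcode Python package; the network name, password, and URL are made up, and the two resulting codes look equally innocent to the eye.

```python
# Sketch of why QR payloads deserve scrutiny: the same call can encode
# a benign Wi-Fi join string or a lookalike malicious one.
# Requires the third-party "qrcode" package; all values are invented.
import qrcode

# Standard Wi-Fi config payload -- scanning this can silently add a
# network to the device's preferred network list.
wifi_payload = "WIFI:T:WPA;S:CoffeeShopGuest;P:sup3rs3cret;;"

# A URL is encoded exactly the same way, and the two printed codes are
# indistinguishable to the human eye.
url_payload = "https://example.com/login"

qrcode.make(wifi_payload).save("wifi.png")
qrcode.make(url_payload).save("link.png")
```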


Security doesn’t always require immediacy

Understanding the importance of long-term security investment is one thing. Putting this into practice presents a new challenge entirely. Organizations can look to assign this task internally, but ultimately, IT personnel need to focus on business goals. At the same time, it is not always reasonable to expect an IT department to keep up with developments on topics ranging from software security to cryptography to hardware architecture. By relying on vendors for security agility, organizations can outsource the technologies required for long-term protection. Enterprise service management (ESM) is one such option that looks to ensure organizations are protected for a matter of years. As digital transformation accelerates, new technologies that enable the business tend to go out of date entirely too soon after deploying. Investing and enhancing the IT department to tackle the challenge can be one way forward, but ultimately businesses need their employees to focus on their own expertise. ESM enables the adoption of new technologies whilst allowing organizations to move at their own pace.


Allyship: Stepping up for a more inclusive IT

Work that involves self-reflection is challenging, and when it comes to addressing diversity issues and one’s position within them, some IT leaders may be unsure about how to effectively approach these sensitive topics. But being an ally can set an example for the entire company. Allies in leadership positions can demonstrate to employees that conversations about diversity, equity, and inclusion (DEI) are encouraged, and that employees from traditionally marginalized groups have a safe environment to bring up concerns they may have about their own experiences in the workplace. “I think it’s about setting an example from the top, being willing to engage in those uncomfortable conversations. A really effective ally is willing to put some skin in the game, to put some of their privilege on the line, on behalf of someone who doesn’t have that privilege,” says Malcom Glenn, director of public affairs at Better.com and former head of global policy for accessibility and underserved communities at Uber. Conversations around racism, sexism, and bigotry, both within and outside of the workplace, can be very difficult. 


Ransomware attacks are not a matter of if, but when

As the CISO role evolves and more data is stored in clouds, DeFiore said her priorities right now are moving "back to basics" and knowing where the airline's data is, applying patches and working from a stance of least privilege. Also important is reducing the attack surface, she said, and "making sure we're only publishing things to the internet that need to be there and segmenting and making sure there's no opportunities for lateral movement" inside the network. Twitter's mission is to protect public conversations, and Sethi said that requires being able to recover quickly. She also said she thinks there will be an increase in the number of security vendors suffering breaches, "which is why I say think about who you partner with." Salem said he was impressed with "how well CISOs responded during the pandemic," and moved from a world in which they had a lot of control to very little--almost overnight. The lesson the security community has learned from that experience is to be agile, he said. Looking ahead, the CISO needs to continue becoming integrated into the day-to-day operations of the business so they can be better prepared, he said.


Data Science Focus Areas for the Future

Forecasting based on historic trends has long been an essential part of conducting business. Whether using seasonal averages or industry knowledge, predicting the future was always a use-case with numerous applications. Data scientists have a good mix of talents that align them with requests to fit forecasting models, but they have not always had the educational background to effectively communicate their results in terms of economics and finance. ... As more fitted models start making their way to production, a knowledge gap in deploying and maintaining models emerges. A 100% accurate model that only lives on your machine is close to 0% useful. The ETL (extract, transform and load) and packaging of ML capabilities with requirements is a grey area for many current data scientists, and not always covered in training programs. Auto-ML capabilities are increasing the frequency with which a useful model, designed to be consumed, is considered for production. In my opinion, this is an area of focus that will become the most lucrative in terms of jobs and compensation.
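
As a toy illustration of the seasonal-average baseline mentioned above, the sketch below forecasts each month as the average of that month across prior years; the sales figures are invented.

```python
# Minimal seasonal-average forecast: predict each month as the mean of
# that month in previous years. Data values are invented.
monthly_sales = {
    2019: [10, 12, 15, 14, 18, 21, 25, 24, 19, 16, 13, 11],
    2020: [11, 13, 16, 15, 19, 23, 27, 25, 20, 17, 14, 12],
}

def seasonal_average_forecast(history):
    years = list(history.values())
    # Average the same calendar month across all available years.
    return [sum(year[m] for year in years) / len(years) for m in range(12)]

print(seasonal_average_forecast(monthly_sales))  # forecast for next year
```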


Cloud banking: More than just a CIO conversation

Data security concerns are top of mind for bank leaders. An important part of understanding the cloud is considering how an enterprise’s current infrastructure and capabilities may be limiting its ability to detect and address new risks and vulnerabilities—and how cloud technology can help. Security is different in the cloud because of the tools that are native to each cloud provider’s environment and the fact that cloud providers typically take responsibility for the security of the lower-level infrastructure layers. The shared security responsibility between cloud providers and the clients they host changes how organizations should anticipate and prepare for security risks. ... Cloud computing can help banks and financial services firms meet ever-evolving regulatory reporting requirements (e.g., Comprehensive Capital Analysis and Review, Solvency II) in multiple operating jurisdictions—a critically important capability in an industry where cross-border transactions are the norm. Cloud solutions can also help banks conduct intraday liquidity and risk calculations, and mine trade surveillance data to detect money laundering and other fraud issues.


5G smartphones have arrived. But for many, the reason to upgrade is still missing

Globally, across all the markets surveyed by Ericsson where 5G commercial networks are available, an average 4% of consumers own a 5G smartphone and have a 5G subscription. In the UK, an overwhelming 97% of respondents are yet to embrace next-generation connectivity. This is partly due to the lack of clarity in the marketing of 5G, which has confused customers as to what the technology is and what it can offer. Heavy tech jargon and misinformation campaigns have in some cases even put users off entirely from planning to upgrade. In the UK, for example, the number of consumers intending to upgrade to 5G next year stands at 25% – down from 27% in 2019. For the few who have switched to 5G networks, however, the experience overall seems positive, with better levels of satisfaction recorded compared to users connected with 4G LTE. Perhaps as a reflection of 5G's capabilities, Wi-Fi usage among those who have upgraded is reducing. A quarter of respondents, said Ericsson, have either decreased or stopped using Wi-Fi after switching to 5G.


Turning an uncertain 2021 into a year of opportunity

IT leaders putting new cloud and technology strategies in place to help improve their resiliency and availability must ensure that, once implemented, they don’t just sit back and hope that it works. Organisations need to know that the improvements to their business continuity plans (BCPs) will stand the test of disruption when further unexpected, and even significant planned-for, events occur further down the line. Stress-testing exercises are the best way of identifying gaps and faults in your BCP, before it is too late and remediating any problems becomes reactive rather than proactive. Organisations can do this by identifying which threats pose the biggest risk to the business – this is a vital step as every company and industry is different. Once these have been identified, each scenario can be placed in order of priority, by gauging exactly how much impact they could have on the business and how complex the plan needs to be to respond. When conducting these exercises, each team member should also be assigned a clear role. This might be as an active player in the BCP test, or it may be as an external party, for example an evaluator or observer, who can help spot any flaws and play a vital role in measuring its success.


NVIDIA Announces AI Training Dataset Generator DatasetGAN

A generative adversarial network (GAN) is a system that is composed of two deep-learning models: a generator which learns to create realistic data and a discriminator which learns to distinguish between real data and the generator's output. After training, often the generator is used alone, to simply produce data. NVIDIA has used GANs for several applications, including its Maxine platform for reducing video-conference bandwidth. In 2019, NVIDIA developed a GAN called StyleGAN that can produce photorealistic images of human faces and is used in the popular website This Person Does Not Exist. Last year, NVIDIA developed a variation of StyleGAN that can take as input the desired camera, texture, background, and other data, to produce customizable renderings of an image. Although GANs can produce an infinite number of unique high-quality images, most CV training algorithms also require that images be annotated with information about the objects in the image. ImageNet, one of the most popular CV datasets, famously employed tens of thousands of workers to label images using Amazon's Mechanical Turk.
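
For readers who want to see the two-model structure in code, below is a minimal, illustrative GAN trained on a toy 1-D Gaussian rather than images; it assumes PyTorch and is in no way StyleGAN, just the bare generator-versus-discriminator objective.

```python
# Minimal GAN sketch on toy 1-D data: the generator learns to mimic
# samples from N(3, 0.5) while the discriminator learns to spot fakes.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3, 0.5)
    fake = generator(torch.randn(64, 8))    # generator output from noise

    # Discriminator learns to tell real data from generator output.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator learns to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, the generator alone produces data (samples near 3.0).
print(generator(torch.randn(5, 8)).detach().squeeze())
```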



Quote for the day:

"A leader is one who sees more than others see and who sees farther than others see and who sees before others see." -- Leroy Eimes

Daily Tech Digest - May 18, 2021

How penetration testing can promote a false sense of security

Savvy cybercriminals, not wanting to waste time nor money, look for the simplest way to achieve their goal. "Attackers have access to numerous tools, techniques, and even services that can help find the unknown portion of an organization's attack surface," suggested Gurzeev. "Similar to the 13th century French attackers of Château Gaillard, but with the appeal of lower casualties and lower cost with a greater likelihood of success, pragmatic attackers seek out an organization's externally accessible attack surface." As mentioned earlier, completely protecting an organization's cyberattack surface is nearly impossible—partly due to attack surfaces being dynamic and partly due to how fast software and hardware change. "Conventional tools are plagued by something I mentioned at the start: assumptions, habits, and biases," explained Gurzeev. "These tools all focus only where they are pointed, leaving organizations with unaddressed blind spots that lead to breaches." By tools, Gurzeev is referring to penetration testing: "Penetration testing is a series of activities undertaken to identify and exploit security vulnerabilities. ..."
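
One tiny corner of "externally accessible attack surface" can be illustrated with a few lines of Python: checking which common ports on a host accept TCP connections. This is a sketch only; run it solely against hosts you are authorized to test (scanme.nmap.org explicitly permits such probes).

```python
# Sketch of the simplest external attack-surface check: which common
# ports accept a TCP connection? Only scan hosts you are allowed to test.
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

def open_ports(host, timeout=1.0):
    found = []
    for port, name in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append((port, name))
    return found

print(open_ports("scanme.nmap.org"))  # a host that permits test scans
```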


Microservices Architecture: Breaking the Monolith

The first thing to know: the less communication, the better the relations. It’s very easy and very tempting to create lots of services that are very easy to test from a singular standpoint, but as a whole, your system will get really complicated and tangled. It makes things difficult to track should a problem arise because you’ve got this enormous entanglement, and it may be hard to identify where the root of the problem lies. Another important consideration is to put events into a queue. Many times we have been told that we cannot break these into separate services because this thing has to be perfectly synchronized for events that happen in the next steps. Usually, that’s not true. Thanks to the queueing and topic-messaging systems that exist today, there are lots of ways to break synchronization. It’s true that you are adding an extra layer that could bring some latency problems, but in the end, being able to break all the synchronicity will probably end up improving your experience. ... It is very easy to keep creating microservices on the cloud, but without a clear plan it is also very easy to lose track of your project’s budget.
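
A minimal sketch of "breaking synchronicity" with a queue follows; Python's in-process queue.Queue stands in for a real broker such as RabbitMQ or Kafka, and the order and email services are invented.

```python
# Sketch of decoupling via a queue: the order service enqueues an event
# and returns immediately; a worker consumes events at its own pace.
import queue
import threading
import time

events = queue.Queue()

def order_service(order_id):
    events.put({"type": "order_placed", "order_id": order_id})
    return "accepted"          # respond without waiting for downstream work

def email_worker():
    while True:
        event = events.get()   # blocks until an event is available
        time.sleep(0.1)        # simulate slow downstream processing
        print(f"sent confirmation for order {event['order_id']}")
        events.task_done()

threading.Thread(target=email_worker, daemon=True).start()
for oid in range(3):
    print(order_service(oid))
events.join()                  # wait for the worker to drain the queue
```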


Chip shortage will hit IT-hardware buyers for months to years

Cisco CEO Chuck Robbins told the BBC in April: “We think we’ve got another six months to get through the short term. The providers are building out more capacity. And that’ll get better and better over the next 12 to 18 months.” The problem could last even longer, others say. “The supply chain has never been so constrained in Arista history,” Arista CEO Jayshree Ullal told analysts at the company’s recent financial briefing. “To put this in perspective, we now have to plan for many components with 52-week lead time.” “We have products with extremely large lead times that we plan ahead for. And I would be remiss, if I didn’t say we while we have some great partners, that the semiconductor supply chain is still constrained,” Ullal said. “Our team have taken some very important steps, to build out our inventory for some of these long-lead-time components, but we could use a lot more parts than we still have.” In its first quarter earnings call Juniper Networks CFO, Ken Miller, told analysts that ongoing supply constraints are likely to continue for a year or more.


Considering the ethics of tech for a more responsible future

“At All Tech Is Human, we recently released a report on improving social media, and after interviewing a diverse sample of 42 individuals from across civil society, government and industry, we realised that we don’t have an agreed future as to where social media should be headed. “This showed that we need more input from diverse groups to determine a better forward action.” The All Tech Is Human founder went on to identify data extraction as the biggest issue regarding the power of social media, due to most outlets being based on a model of obtaining user data, which benefits advertising over apps. “The fact that social media practices are more geared towards advertisers than communication creates most of the problems we see,” Polgar continued. “This is where regulations are important. These platforms are trying to maximise their profitability inside the parameters of legality.” According to Polgar, while tech companies need to consider the need to crack down on misinformation around topics such as Covid-19, the other side of the coin manifests itself in the argument that social media outlets don’t have the moral authority to remove these posts from the platform.


Customer service is not customer experience (and vice versa)

Because customer experience is strategic, not tactical, you need to know where the value is coming from, and where you’re throwing good money after bad. First, identify your valuable customers, advises Strategex’s Nash, then go deeper to analyze why they are valuable. Are they spending money broadly or deeply, or both? “We have years and years of data to prove the 80/20 rule — that 80 percent of your revenue comes from 20 percent of your customers.” More than that, she adds, it’s not uncommon for the top 5 percent of customers to produce half the revenue. “Words get people’s attention; data causes action,” she notes. This analysis matters because the resources spent servicing unprofitable customers can be a distraction from work that should be done to create a great experience for those who matter most to your business. “Once you know who your top customers are, you can create a customer experience for them, with the appropriate expectations on their side and effort on the employee side,” Nash says. And you can set different experience and service expectations for less-valuable customers. This can be as simple as offering clearly branded tiers of service or membership.
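
The 80/20 check itself is a few lines of arithmetic. The sketch below computes what share of revenue the top slice of customers produces; the revenue figures are invented.

```python
# Sketch of the 80/20 analysis: what share of revenue comes from the
# top slice of customers? Numbers are invented for illustration.
def revenue_concentration(revenues, top_fraction):
    ordered = sorted(revenues, reverse=True)
    top_n = max(1, int(len(ordered) * top_fraction))
    return sum(ordered[:top_n]) / sum(ordered)

customer_revenue = [120_000, 80_000, 30_000, 9_000, 5_000,
                    3_000, 2_000, 1_500, 1_000, 500]
share = revenue_concentration(customer_revenue, 0.2)
print(f"top 20% of customers: {share:.0%} of revenue")
```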


Interview With Srikanth Phalgun Pyaraka, Chartered Data Scientist

While working with business stakeholders to integrate data and analytics into business models, we have faced multiple challenges. One of the most significant challenges in most organisations is moving from Business Intelligence reporting to the next level of enabling predictive or prescriptive analytics decision-making. This is what we call the analytics chasm. Organisations should move beyond the analytics chasm with the help of a change in the mindset of decision-makers. The main focus should be on leveraging technology for competitive differentiation, not competitive parity. Greater emphasis should be placed on building infrastructure and data analytics environments to support data-driven business initiatives. ... The Chartered Data Scientist designation is the highest distinction in the data science profession. The exam looks for skills including computer programming in R and Python; mathematics, especially statistics and probability; analytical methods such as EDA and ML algorithms; advanced analytics including deep learning, computer vision and NLP; and business analytics at international standards.


10 Emerging Cybersecurity Trends To Watch In 2021

Extended detection and response (XDR) centralizes security data by combining security information and event management (SIEM); security orchestration, automation, and response (SOAR); network traffic analysis (NTA); and endpoint detection and response (EDR). Obtaining visibility across networks, cloud and endpoints, and correlating threat intelligence across security products, boosts detection and response. An XDR system must have centralized incident response capability that can change the state of individual security products as part of the remediation process, according to research firm Gartner. The primary goal of an XDR platform is to increase detection accuracy by correlating threat intelligence and signals across multiple security offerings, and improving security operations efficiency and productivity, Gartner said. XDR offerings will appeal to pragmatic midsize enterprise buyers that do not have the resources and skills to integrate a portfolio of best-of-breed security products, according to Gartner. Advanced XDR vendors are focusing up the stack by integrating with identity, data protection, cloud access security brokers, and the secure access service edge to get closer to the business value of the incident.


How the API economy is powering digital transformation

“APIs allow businesses to more efficiently unify and structure data from across multiple communication platforms and leverage that data to build more productive workflows, bring products and features to market faster, and create modern user experiences that drive adoption and retention,” Polyakov told VentureBeat. “APIs allow businesses to achieve all of this without having to commit large amounts of time and resources, allowing product and engineering teams to focus on other critical issues and business goals.” However, Polyakov notes that many of the best APIs are those that handle and transfer lots of rich data, meaning “proper security protocols and compliance certifications” are vital. “Without proper assessments or an understanding of good design for security, businesses can accidentally expose sensitive information or unintentionally open themselves up to malicious inputs, compliance violations, and more,” Polyakov said. ... “The API economy has empowered companies to be more successful — whether it’s through leveraging third-party APIs to improve business processes, attracting and retaining customers, or producing an API as a product,” Bansal told VentureBeat.


How 2020 Shaped Transformation for Public Sector CIOs

Digital citizen services saw increased demand with people needing 24/7 access to critical services and information. What was once considered more of a “nice-to-have” became an absolute necessity. With some normalcy returning, local governments must maintain this momentum toward modernization with digital citizen services at the forefront of their digital transformation plans. Remote work in the public sector increased efficiency, cost-savings and led to more empowered and engaged government employees. A survey found remote government employees 16% more engaged, 19% more satisfied and 11% less likely to leave their agencies than non-remote workers. Much like the private sector, when deciding on what a post-pandemic workplace looks like, local governments need to consider a hybrid environment and continue providing infrastructure and support for remote work. Advanced cybersecurity is far from a new priority for local governments. But the rapid digitization of the public sector over the past year -- increased digital services and data, mobilization of the workforce, cloud migration, and more -- made cybersecurity an even bigger focus. 


Hiring remote software developers: How to spot the cheaters

There is a subtle balancing act in providing an assessment platform that is efficient at sensing fraud, but at the same time provides a good experience for honest test takers. The most successful assessment platforms usually apply a two-pronged approach by mixing and matching fraud mitigation with fraud detection. Signing the code of honor is an example of graceful and efficient mitigation tactics, rooted in academic research (Ariely, 2007) and confirmed by years of practice. It has been scientifically established that being reminded of moral issues makes an individual less prone to cheat. It is always wise to protect the platform’s evaluation content. Quality vendors limit the time and number of exposures of the same assessment content, actively monitor scores and pass rates to preempt task depletion and constantly crawl the internet to identify leaked tasks and solutions. Test randomization, a platform feature that enables automated on-the-spot test creation from a set of preconfigured equivalent tasks, is helpful in mitigating cheating, since it’s harder to game a system that is less predictable.
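
Test randomization can be sketched in a few lines: assemble each candidate's assessment on the spot from pools of equivalent tasks, seeded per candidate so the result is reproducible. The task names below are placeholders.

```python
# Sketch of on-the-spot test assembly from pools of equivalent tasks,
# making the resulting exam harder to game. Task names are placeholders.
import random

TASK_POOLS = {
    "algorithms": ["two_sum_variant_a", "two_sum_variant_b", "two_sum_variant_c"],
    "debugging":  ["fix_off_by_one_a", "fix_off_by_one_b"],
    "design":     ["rate_limiter_a", "rate_limiter_b", "rate_limiter_c"],
}

def build_assessment(candidate_id):
    rng = random.Random(candidate_id)  # reproducible per candidate
    return {topic: rng.choice(pool) for topic, pool in TASK_POOLS.items()}

print(build_assessment("cand-042"))
```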



Quote for the day:

"If stupidity got us into this mess, then why can't it get us out?" -- Will Rogers

Daily Tech Digest - May 17, 2021

Paying it forward: why APIs are key to easing technical debt

Organisations must find a way to reduce their technical debt, by replacing tight couplings with a more flexible integration layer. As such, API strategies are becoming more important than ever. APIs create a loose coupling between applications, data, and devices, so organisations can make changes quickly without impacting their existing integrations or the functionality of digital services. It therefore becomes easier to accelerate innovation and deliver new products and services faster, without increasing the risk of business disruption or spiralling costs. One organisation putting this into practice is Allica Bank, a new, digital-only bank that exclusively caters to SMEs. Rather than build its offerings around one core platform as traditional banks do, Allica is built around a more flexible integration layer, underpinned by APIs. When it needs to expose the data from a certain application or system, it does so via an API, without the need to write any code to connect the systems in question. This makes for a much more agile operation, as new services can be switched in and out as needed. For Allica, this level of agility has been critical to its ability to meet its customers’ needs for urgent access to credit in 2020.


Machine Learning and the Coming Transformation of Finance

As financial firms get more comfortable with machine learning in their most advanced departments, they’ll start to adopt it in other areas to deal with the vast treasure trove of structured and unstructured data pouring into their data lakes. Whether that’s trying to give customers better answers when they call with questions, or quickly figuring out whether someone is qualified for a loan, machine learning will seep into every aspect of the financial enterprise. It will also revolutionize the areas where it’s already dominant: trading and fraud. None of this comes without risks though. Rule-based systems are at least easier to understand. People can inspect and interpret hand-coded rules, but with machine learning the systems are more opaque and we don’t always know why a machine made the decision it made. Even worse, as governments take their first stabs at regulation, it’s clear from early drafts of bills in the EU that regulators don’t fully understand how machine learning models work, and they’ve drafted vaguely worded bills that will be open to interpretation and create additional compliance complexity.


Generating greater public value from data

Data ownership is a complex concept. If I have data about you in my database, who owns that data? Does it depend on what kind of data it is? For example, if I know you just bought a new boat, can I sell that information? What if I know you were just diagnosed with cancer? According to a 2018 survey, 90% of respondents believe it is unethical to share data about them without their consent, highlighting growing concerns surrounding data control and ownership. Bearing this in mind, and recognizing the importance of building citizen trust, some governments have begun to establish frameworks to give citizens greater control over their data. For instance, in January 2020, Indonesia’s government submitted a bill to parliament that would require explicit consent to distribute personal data such as name, nationality, religion, sexual orientation, or medical records. Violators could face up to seven years in jail for sharing citizen data without consent. Another governance approach is shown by the UK National Health Service (NHS). In the COVID-19 app of the UK NHS, the Department of Health and Social Care, NHS England, and NHS Improvement are the designated data controllers.


Cyber investigations, threat hunting and research: More art than science

There is a reason why this is a requirement to become one of the most successful. Security defenders need to be 100% perfect at protecting 100% of the countless entry points 100% of the time in order to prevent breaches, while on the other hand, hackers only need one exploit that works. While that adage is considerably oversimplified, the moral is true: Being a defender means keeping up with an impossible firehose of changing technologies, controls, and attacks. Not to mention, your adversaries are not pieces of code – they are creative and motivated people. And let’s be honest, hacking is fun! When you are engaged in something fun, you likely have heightened motivation and creativity, so only those who approach the challenge of defense work with the same level of play and creativity as hackers will rise to the top of their team, company, and industry. The reflections of this “playful” approach can be seen in quotes from some of the most famous contemporary artists of today. “When someone sees one of my paintings, I want them to really feel the place that I’m depicting. And so, my desire is that they’re going to want to travel into that painting and become part of it.” – James Coleman


Appian Launches New Low-Code Automation Platform For Enterprises

Interestingly, the launch of its new low-code automation comes when enterprises are looking for quick solutions to deploy AI-powered applications and smooth workflow automation across departments with limited resources and agile processes. Today, low-code, no-code technology platforms have emerged as a go-to model for businesses. Several players, including Appian, Microsoft, Amazon, Pega, and ServiceNow, are working on products and ideas to ease the burden for enterprises. In India, companies like Infosys, HCL Technologies and Tech Mahindra, alongside various startups, are also working on this technology. “This is the time for low-code automation platforms,” said Matt Calkins, Appian founder and CEO. “We have just started a new decade, but low-code is how applications are built in the future. It’s inevitable,” said Calkins. Gautam Nimmagadda, CEO of Quixy, a cloud-based no-code application development platform, told AIM that no-code would allow more companies to participate in software development, allowing professional developers to focus on advanced and specialised areas.


How to Take AIOps from a Promising Concept to a Practical Reality

AIOps offers organizations the potential to improve IT team productivity and cost while fortifying overall business stability and resilience. The technology also supplies the ability to gain deep insights on customer experiences and journeys. "AIOps can bring predictive abilities to operations so organizations are able to adjust to changes," Velayudham said. "By automating the mundane work and uncovering insights from large datasets that aren’t possible to sift through manually, AIOps can increase IT team efficiency," he added. By taking a strategic and intelligent approach to IT automation, businesses can also accelerate their digital transformation efforts. "IT automation can also eliminate repetitive manual tasks, freeing up your IT team to address more strategic tasks, making the entire team more valuable to the business," Mirani said. The AIOps vendor field is growing rapidly. This fact should help ease AIOps adoption, but it's also creating some confusion for potential customers as they find themselves sorting through various tools and approaches. 


Kubernetes: 6 open source tools to put your cluster to the test

Kube-monkey is a version of Netflix’s famous (in IT circles, at least) Chaos Monkey, designed specifically to test Kubernetes clusters. Chaos Monkey essentially asks: “What happens to our application if this machine fails?” It does this by randomly terminating production VMs and containers. As a manifestation of the broader discipline of chaos engineering, the core idea behind the open source tool is to foster resilient, fault-tolerant applications by treating failure as a given in any environment. ... Kubernetes has lots of native security controls that require proper configuration and fine-tuning over time. The community commitment to the platform’s security has also led to the creation of various commercial and open source tools for further ensuring the security of your applications and environment. Kube-hunter is a good example: it’s an open source tool for pen-testing your cluster and its nodes. Basically, penetration testing is to security what chaos testing is to resiliency. By assuming that you have weaknesses that an attacker can exploit (because you almost certainly do), you more proactively build security into your systems. You’re attacking yourself to discover holes before someone else does.
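
What kube-monkey-style chaos boils down to can be sketched with the official kubernetes Python client: pick a random pod and delete it, then observe how the system recovers. The namespace is hypothetical; only run something like this against a cluster you are allowed to break.

```python
# Sketch of the core chaos-testing move: randomly terminate one pod and
# watch whether the application recovers. Namespace is a placeholder;
# requires the official "kubernetes" Python client and cluster access.
import random
from kubernetes import client, config

config.load_kube_config()            # reads credentials from ~/.kube/config
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(namespace="staging").items
victim = random.choice(pods)

print(f"terminating {victim.metadata.name}")
v1.delete_namespaced_pod(name=victim.metadata.name, namespace="staging")
```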


We need to design distrust into AI systems to make them safer

The negatives are really linked to bias. That’s why I always talk about bias and trust interchangeably. Because if I’m overtrusting these systems and these systems are making decisions that have different outcomes for different groups of individuals—say, a medical diagnosis system has differences between women versus men—we’re now creating systems that augment the inequities we currently have. That’s a problem. And when you link it to things that are tied to health or transportation, both of which can lead to life-or-death situations, a bad decision can actually lead to something you can’t recover from. So we really have to fix it. The positives are that automated systems are better than people in general. I think they can be even better, but I personally would rather interact with an AI system in some situations than certain humans in other situations. Like, I know it has some issues, but give me the AI. Give me the robot. They have more data; they are more accurate. Especially if you have a novice person. It’s a better outcome. It just might be that the outcome isn’t equal.


Performance Testing of Microservices

You need a distinct strategy for testing microservices as they are built on a distinct architecture and have several integrations with other microservices within the individual organizations and from the outside world (third-party integrations). Moreover, these necessitate a huge amount of collaboration among the various squads or teams developing independent microservices. Additionally, they are independent, single-purpose services and are deployed separately as well as regularly. While microservices bring clear benefits, the architecture also brings complicated challenges to cater to. As manifold services interact with each other through REST-based endpoints, performance degradation can sink a business. For instance, for an eCommerce app, even a 100ms difference on its shopping cart or product listings can directly influence the bottom line of order placement. Likewise, for an event-driven product with frequent contact between customers, even a delay of a few milliseconds can annoy clients and could cause them to go elsewhere. Whatever the situation may be, reliability and performance are significant elements of software development, so businesses must spend the necessary time and effort on performance tests.
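
A minimal latency check along these lines is sketched below: fire concurrent requests at an endpoint and report a high percentile. The URL is a placeholder and the third-party requests package is assumed.

```python
# Minimal latency sketch: hit an endpoint concurrently and report p95.
# The URL is a placeholder; requires the third-party "requests" package.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://localhost:8000/cart"   # hypothetical service under test

def timed_request(_):
    start = time.perf_counter()
    requests.get(URL, timeout=5)
    return (time.perf_counter() - start) * 1000  # milliseconds

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = sorted(pool.map(timed_request, range(200)))

p95 = latencies[int(len(latencies) * 0.95)]
print(f"p95 latency: {p95:.1f} ms")  # even ~100 ms here can move revenue
```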


Agility Broke AppSec. Now It's Going to Fix It.

AppSec teams are charged with making sure software is safe. Yet, as the industry's productivity multiplied, AppSec experienced shortages in resources to cover basics like penetration testing and threat modeling. The AppSec community developed useful methodologies and tools — but outnumbered 100 to 1 by developers, AppSec simply cannot cover it all. Software security is a highly complex process built upon layers of time-consuming, detail-oriented tasks. To move forward, AppSec must develop its own approach to organize, prioritize, measure, and scale its activity. Agile approaches and tools emerged from recognizing the limitations of longstanding approaches to software development. However, AppSec's differences mean it can't simply copy software development. For example, bringing automated testing into CI/CD might overlook significant things. First, every asset delivered outside CI/CD will remain untested and require alternative AppSec processes, potentially leading to unmanaged risk and shadow assets. Second, when developers question the quality of a report, it creates friction between engineers and security, jeopardizing healthy cooperation.



Quote for the day:

“Make your team feel respected, empowered and genuinely excited about the company’s mission.” -- Tim Westergren

Daily Tech Digest - May 16, 2021

Scientist develops an image recognition algorithm that works 40% faster than analogs

Convolutional neural networks (CNNs), which include a sequence of convolutional layers, are widely used in computer vision. Each layer in a network has an input and an output. The digital description of the image goes to the input of the first layer and is converted into a different set of numbers at the output. The result goes to the input of the next layer and so on until the class label of the object in the image is predicted in the last layer. For example, this class can be a person, a cat, or a chair. For this, a CNN is trained on a set of images with a known class label. The greater the number and variability of the images of each class in the dataset, the more accurate the trained network will be. ... The study's author, Professor Andrey Savchenko of the HSE Campus in Nizhny Novgorod, was able to speed up the work of a pre-trained convolutional neural network with arbitrary architecture, consisting of 90-780 layers in his experiments. The result was an increase in recognition speed of up to 40%, while controlling the loss in accuracy to no more than 0.5-1%. The scientist relied on statistical methods such as sequential analysis and multiple comparisons.
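
The general "early exit" idea behind such speedups can be sketched as follows: attach a small classifier after each group of layers and stop as soon as one is confident enough. This is an illustration of the concept in PyTorch with untrained toy layers, not the paper's exact sequential-analysis procedure.

```python
# Conceptual early-exit sketch: stop propagating through the network once
# an intermediate classifier is confident enough. Toy, untrained layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(8)),
    nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(4)),
])
exit_heads = nn.ModuleList([
    nn.Linear(8 * 8 * 8, 10),    # classifier after the first block
    nn.Linear(16 * 4 * 4, 10),   # classifier after the second block
])

def predict_with_early_exit(x, threshold=0.9):
    label = None
    for block, head in zip(blocks, exit_heads):
        x = block(x)
        probs = F.softmax(head(x.flatten(1)), dim=1)
        confidence, label = probs.max(dim=1)
        if confidence.item() >= threshold:  # confident enough: skip the rest
            break
    return label.item()

print(predict_with_early_exit(torch.randn(1, 3, 32, 32)))
```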


Comprehensive Guide To Dimensionality Reduction For Data Scientists

The approaches for Dimensionality Reduction can be roughly classified into two categories. The first one is to discard less-variance features. The second one is to transform all the features into a few high-variance features. We will have a few of the original features in the former approach that do not undergo any alterations. But in the latter approach, we will not have any of the original features, rather, we will have a few mathematically transformed features. The former approach is straightforward. It measures the variance in each feature. It claims that a feature with minimal variance may not have any pattern in it. Therefore, it discards the features in the order of their variance from the lowest to the highest. Backward Feature Elimination, Forward Feature Construction, Low Variance Filter and Lasso Regression are the popular techniques that fall under this category. The latter approach claims that even a less-important feature may have a small piece of valuable information. It does not agree with discarding features based on variance analysis.
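
Both families can be shown on toy data in a few lines with scikit-learn: a variance filter that discards near-constant features, and PCA, which replaces all features with a few high-variance components.

```python
# Sketch of both families on toy data: discard low-variance features
# (first approach) versus transform everything into a few high-variance
# components (second approach, here PCA). scikit-learn assumed.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0, 5.0, 200),    # high-variance feature: kept
    rng.normal(0, 0.01, 200),   # near-constant feature: discarded
    rng.normal(0, 3.0, 200),
])

kept = VarianceThreshold(threshold=0.1).fit_transform(X)
print("after variance filter:", kept.shape)   # original features survive

components = PCA(n_components=2).fit_transform(X)
print("after PCA:", components.shape)         # transformed features only
```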


How security reskilling and automation can mend the cybersecurity skills gap

To understand the high demand for cybersecurity skills, consider how much has changed in IT—especially in the last year. From a rapid increase in cloud migrations to a huge shift toward remote work, IT teams everywhere have been forced to adapt quickly to keep up with the changing needs of their organizations. However, the rapid expansion of technology and explosion of remote work has kept IT busy enough. They don’t have the capacity to adequately handle responsibilities ranging from regular security hygiene to the patching and forensics surrounding the latest zero-day threat. ... With the difficulty of recruiting, hiring, and onboarding new cybersecurity experts from a small talent pool, consider investing in retraining your workforce to organically grow needed cybersecurity skills. Besides avoiding a lengthy headhunting process, this also makes clear economic sense. According to the Harvard Business Review, it can cost six times as much to hire from the outside rather than build talent from within. In addition, focusing on retraining opens up career progression for your best employees—building their skills, morale, and loyalty to your organization.


The Flow System: Leadership for Solving Complex Problems

One of the most significant limitations in today’s leadership practices is the lack of development. Most leadership training is disguised as leader education. These training efforts also do not include time for emerging leaders to practice their newly learned leadership skills. Without practice and the freedom to fail during the developmental stages, it is nearly impossible for emerging leaders to master these skills. Another problem with leadership development is that most programs deliver training to everyone the same way. Most leadership development programs were initially designed as “one-size-fits-all” training. In The Flow System, we make great efforts to design leadership and team development around the contextual setting. We view leadership as a collective construct, not an individual construct. We incorporate the team as the model of leadership, and individual team members as leaders using a shared leadership model. This collective becomes the organization’s leadership model, from the lower ranks up to the executive level.


5 Practices To Give Great Code Review Feedback

The first thing to do is to have a very clear context about the PR. Sometimes we want to go fast; we think we already know what our colleague wanted to do, the best way to do it, and we just skim through the description. However, it is much better to take some time and read the title and description of the PR carefully, especially the latter because we could find all the assumptions that guided our colleague. We could find a more detailed description of the task and perhaps a good description of the main issue they faced when developing it. This could give us all the information we need to perform a constructive review, taking into consideration all the relevant aspects of it. ... When reviewing a piece of code, focus on the most important parts: the logic, the choices of data structure and algorithms, whether all the edge cases have been covered in the tests, etc. Many of the other syntax/formatting elements should be taken care of by a tool, such as a linter, a formatter, a spell checker, etc. There is no point in highlighting them in a comment. The same idea holds for how the documentation is written. There should be some conventions, and it is OK to tell the contributor if they are not following them.


Machine learning does not magically solve your problems

Looking at the neural network approach we see that some of the manual tasks are absorbed into the neural network. Specifically, feature engineering and selection are done internally by the neural network. On the flipside, we have to determine the network architecture (number of layers, interconnectedness, loss function, etc.) and tune the hyperparameters of the network. In addition, many other tasks such as assessing the business problem still need to be done. As with TSfresh/Lasso, the neural network is an approach that works well in a specific situation, and is neither a quick nor an automated procedure. A good way to frame the change from regression to the neural network is that instead of solving the problem manually, we build a machine that solves the problem for us. Adding this layer of abstraction allows us to solve problems we never thought we could solve, but that still takes a lot of time and money to create. ... Machine learning has some magical and awe-inspiring applications, extending the range of applications we thought possible to be solved using a computer. However, the awesome potential of machine learning does not mean that it automatically solves our challenges.
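
For concreteness, here is a hedged sketch of the manual pipeline the article contrasts with the neural network: engineer time-series features with tsfresh, then let Lasso shrink the irrelevant ones to zero. The data is synthetic and both libraries are assumed installed.

```python
# Sketch of the manual TSfresh/Lasso pipeline: automated feature
# engineering followed by Lasso-based feature selection. Synthetic data;
# assumes tsfresh and scikit-learn are installed.
import numpy as np
import pandas as pd
from tsfresh import extract_features
from sklearn.linear_model import Lasso

# Toy dataset: 20 short series, target = amplitude of each series.
rows, targets = [], []
for sid in range(20):
    amplitude = np.random.uniform(1, 5)
    for t in range(50):
        rows.append({"id": sid, "time": t,
                     "value": amplitude * np.sin(0.3 * t)
                              + np.random.normal(0, 0.1)})
    targets.append(amplitude)

df = pd.DataFrame(rows)

# Feature engineering step (automated here, manual in classic regression).
X = extract_features(df, column_id="id", column_sort="time").fillna(0)

# Feature selection step: Lasso shrinks irrelevant coefficients to zero.
model = Lasso(alpha=0.1).fit(X, targets)
print("features kept:", int((model.coef_ != 0).sum()), "of", X.shape[1])
```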


Five Tips For Creating Design Systems

A product experience that delights is usually designed with persistent visuals and consistent interaction. Users want to feel comfortable knowing that no matter where they navigate, they won’t be surprised by what they find. Repetition, in the case of product design, is not boring, but welcome. Design systems create trust with users. Another benefit is the increased build velocity from design and engineering teams. As designers, we are tasked with solving problems. We want to create a simple understanding of how our users can accomplish tasks in a workflow. Of course, we are tempted at times to invent new patterns to solve design problems. We often forget, in the minutia of design iterations, that we’ve already solved a particular problem in a prior project or in another part of the current product. This inefficiency can lead to wasted time, especially if those existing patterns and components have not been documented. In a single-person design team, the negative effects may not be as visible, but one can imagine the exponential nature of a larger design team consistently duplicating existing work or creating new patterns that, ultimately, create an inconsistent user experience.


A Gentle Introduction to Multiple-Model Machine Learning

Typically, a single output value is predicted. Nevertheless, there are regression problems where multiple numeric values must be predicted for each input example. These are referred to as multiple-output regression problems. Models can be developed to predict all target values at once, although a multi-output regression problem is another example of a problem that can be naturally divided into subproblems. As with binary classification in the previous section, most techniques for regression predictive modeling were designed to predict a single value. Predicting multiple values can pose a problem and requires the technique to be modified; some techniques cannot reasonably be modified for multiple values. One approach is to develop a separate regression model to predict each target value in a multi-output regression problem. Typically, the same algorithm type is used for each model. For example, a multi-output regression with three target values would involve fitting three models, one for each target.
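
A minimal sketch of this per-target approach, assuming scikit-learn and a synthetic dataset (the estimator choice is illustrative only): scikit-learn's MultiOutputRegressor fits one clone of the base model per target, which is exactly the strategy described above.

    # One-model-per-target multi-output regression on synthetic data.
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression
    from sklearn.multioutput import MultiOutputRegressor

    X, y = make_regression(n_samples=200, n_features=10, n_targets=3, noise=0.1)

    model = MultiOutputRegressor(LinearRegression())  # three targets -> three fitted models
    model.fit(X, y)
    print(len(model.estimators_))      # 3: one underlying regressor per target
    print(model.predict(X[:2]).shape)  # (2, 3): three values predicted per input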


Why is Business Intelligence (BI) important?

The term “data-driven decision-making” doesn’t fully encapsulate one of its important subtexts: People almost always mean fast decisions. This distinction matters because it’s one of the capabilities that modern BI tools and practices enable: Decision-making that keeps pace (or close enough to it) with the speed at which data is produced. “Data is now produced so fast and in such large volumes that it is impossible to analyze and use effectively when using traditional, manual methods such as spreadsheets, which are prone to human error,” says Darren Turner, head of BI at Air IT. “The advantage of BI is that it automatically analyzes data from various sources, all accurately presented in one easy-to-digest dashboard.” Sure, everyone talks about the importance of speed and agility across technology and business contexts. But that’s kind of the point: If you’re not doing it, your competitors almost certainly are. ... “In a marketplace where the volume of data is ever-increasing, the ability for it to be processed and translated into sound business decisions is essential for better understanding customer behavior and outperforming competitors.”


What Is NFT (Non Fungible Tokens)? What Does NFT Stand For?

The bulk of NFTs are stored on the Ethereum blockchain. Ethereum, like Bitcoin and Dogecoin, is a cryptocurrency, but its blockchain also supports non-fungible tokens (NFTs): individual tokens that carry extra information enabling them to function differently from ordinary coins. That extra information is the key feature, as it allows them to be presented as art, music, video (and so on) in JPGs, MP3s, photographs, GIFs, and other formats. They can be bought and sold like any other medium of art because they have value, and their value is largely dictated by supply and demand, much like physical art. But that doesn’t suggest, in any way, that there is just one digital copy of an NFT artwork available. One can obviously replicate the file, much as art prints of originals are made, bought, and sold, but the copies won’t hold the same value as the original.
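
To make the "extra information" concrete, here is a minimal sketch assuming web3.py; the node URL and contract address are placeholders you would have to replace. For an ERC-721 token, tokenURI points at the metadata (name, image link, and so on) that lets the token be displayed as art.

    # Reading an ERC-721 token's metadata pointer with web3.py.
    # The provider URL and contract address below are placeholders.
    from web3 import Web3

    ERC721_ABI = [{
        "name": "tokenURI", "type": "function", "stateMutability": "view",
        "inputs": [{"name": "tokenId", "type": "uint256"}],
        "outputs": [{"name": "", "type": "string"}],
    }]

    w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))  # placeholder node URL
    nft = w3.eth.contract(address="0x0000000000000000000000000000000000000000",  # placeholder
                          abi=ERC721_ABI)
    print(nft.functions.tokenURI(1).call())  # typically an HTTP/IPFS link to JSON metadata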



Quote for the day:

"It is not fair to ask of others what you are not willing to do yourself." -- Eleanor Roosevelt

Daily Tech Digest - May 15, 2021

Hybrid multiclouds promise easier upgrades, but threaten data risk

Lack of ongoing training and recertification is another weak point. Such training helps to reduce the number and severity of hybrid cloud misconfigurations, which are the leading cause of hybrid cloud breaches today; it’s surprising more CIOs aren’t defending against them by paying for their teams to get certified. Each public cloud platform provider has a thriving sub-industry of partners that automate configuration options and audits. Many can catch incorrect configurations by constantly scanning hybrid cloud configurations for errors and inconsistencies. Automating configuration checking is a start, but a CIO needs a team to keep these scanning and audit tools current and optimized while overseeing them for accuracy; automated checkers aren’t strong at validating unprotected endpoints, for example. Automation efforts also often overlook key factors: inconsistent, often incomplete controls and monitoring across legacy IT systems need to be addressed, along with inconsistency in monitoring and securing public, private, and community cloud platforms. And lack of clarity over who owns which part of a multicloud configuration persists because IT and the lines of business debate who will pay for it.
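
As a toy illustration of what such an automated configuration check can look like (assuming boto3 and AWS credentials; this single check is only a narrow example, not an audit tool), the sketch below flags S3 buckets that have no public-access block configured:

    # Toy misconfiguration scan: flag S3 buckets without a public-access block.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"Possible misconfiguration: {name} has no public-access block")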


Cybersecurity Oversight and Defense — A Board and Management Imperative

Although it is common to have the cyber risk oversight function fall to the audit committee, this should be carefully considered given the burden on audit committees. An alternative to consider, depending on the magnitude of the oversight responsibility, is the formation of a dedicated, cyber-specific board-level committee or sub-committee. At the same time, because cybersecurity considerations increasingly affect all operational decisions, they should be a recurring agenda item for full board meetings. Companies that already have standalone risk or technology committees should also consider where and how to situate cybersecurity oversight. The appointment of directors with experience in technology should be evaluated alongside board tutorials and ongoing director education on these matters. Robust management-level systems and reporting structures support effective board-level oversight, and enterprise-wide cybersecurity programs should be re-assessed periodically, including to ensure they flow through to individual business units and legacy assets as well as newly acquired or developed businesses.


Linux and open-source communities rise to Biden's cybersecurity challenge

This is not, of course, a problem only with open-source software; with open source you can actually see the code, so it’s easier to produce an SBOM. Proprietary programs, like the recently and massively exploited Microsoft Exchange, are black boxes: there’s no way to really know what’s in Apple or Microsoft software. Indeed, the biggest supply-chain security disaster so far, SolarWinds’ catastrophic failure to secure its software supply chain, was a proprietary software chain failure. Besides SPDX, the Linux Foundation recently announced a new open-source software signing service: the sigstore project. Sigstore seeks to improve software supply chain security by enabling the easy adoption of cryptographic software signing backed by transparency-log technologies. Developers are empowered to securely sign software artifacts such as release files, container images, and binaries, and these signing records are then kept in a tamper-proof public log. The service will be free for all developers and software providers to use. The sigstore code and operational tooling that will make this work are still being developed.
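
For reference, an SBOM in SPDX's tag-value format records exactly this "what's inside" information. The abbreviated fragment below is hypothetical (the application name and package version are made up) but uses real SPDX 2.2 field names:

    SPDXVersion: SPDX-2.2
    DataLicense: CC0-1.0
    SPDXID: SPDXRef-DOCUMENT
    DocumentName: example-app
    PackageName: openssl
    PackageVersion: 1.1.1k
    PackageLicenseConcluded: OpenSSL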


DevOps didn’t kill WAF, because WAF will never truly die

WAFs are specific to each application and, therefore, require different protections. The filtering, monitoring, and policy enforcement (such as blocking malicious traffic) provide valuable protections but carry cost implications and consume computing resources. In a DevOps-fed cloud environment, it’s challenging to keep WAFs current with the constant flow of updates and changes. Introducing security into the CI/CD pipeline can solve that problem, but only for those apps being developed that way. It’s impossible to build security sprints into old third-party apps or applications deployed by different departments. The mere existence of those apps presents risk to the enterprise. They still need to be secured, and WAFs are likely still the best option. It’s also important to remember that no approach to cybersecurity will be perfect and that an agile DevOps methodology won’t be enough on its own. Even in an environment believed to be devoid of outdated or third-party apps, you can never be sure what other groups are doing or deploying—shadow IT is a persistent problem for enterprises.
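
As a purely illustrative sketch of the filtering and blocking a WAF performs (a toy Python check, not a real WAF; production systems express this as maintained rule sets, for example ModSecurity's SecRule directives):

    import re

    # Toy request filter: block requests whose payload matches known-bad patterns.
    # The patterns here are deliberately simplistic stand-ins for real WAF rules.
    BLOCK_PATTERNS = [
        re.compile(r"(?i)<script"),          # naive XSS probe
        re.compile(r"(?i)union\s+select"),   # naive SQL-injection probe
    ]

    def should_block(path: str, query_string: str) -> bool:
        """Return True if the request looks malicious and should be denied."""
        payload = f"{path}?{query_string}"
        return any(p.search(payload) for p in BLOCK_PATTERNS)

    print(should_block("/search", "q=<script>alert(1)</script>"))  # True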
 

Top 10 Latest Research On Brain Machine Interface

The brain-machine interface is a field of study that captures this neural process to control external software and hardware. Though the technology is at an early stage, these are its current possibilities. Brain-controlled wheelchair: a technique to ease the lives of disabled people; by concentrating, users can navigate the wheelchair through familiar indoor environments. Brain-controlled robotic arm: a brainwave sensor captures brain signals whenever the user blinks, concentrates, or meditates, and the robotic arm is moved based on the EEG data collected. Brain keyboard: paralyzed people often cannot communicate with their surroundings, but a brain keyboard can solve that; EEG sensors read eye blinks and the system translates them into text on a display. Brain-controlled helicopter: can you imagine flying a helicopter with your brain? It’s possible; the helicopter flies according to the pilot’s concentration and meditation, which navigate it up and down. Brain-controlled password authentication: EEG can be applied to biometric identification because brain signals and patterns are unique to every individual.
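
As a toy illustration of the blink detection behind the brain keyboard (the data and threshold below are entirely hypothetical; real systems use far more careful signal processing), eye blinks show up in EEG as large-amplitude artifacts that even a simple threshold can catch:

    import numpy as np

    # Stand-in for one EEG channel, with a fake "blink" artifact injected.
    signal = np.random.normal(0, 1, 1000)
    signal[400:405] += 12

    THRESHOLD = 8.0  # hypothetical amplitude cutoff
    blink_samples = np.flatnonzero(signal > THRESHOLD)
    if blink_samples.size:
        print(f"Blink detected around sample {blink_samples[0]}")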


Are business leaders scared by the public cloud?

Security and compliance are the two biggest barriers to adoption. However, for the majority of business leaders the cloud is more secure, and compliance easier to maintain, than on-premises; only a tiny minority of decision-makers find the public cloud less capable on either count. Although the cloud is superior in capability, switching to cloud-native security and compliance models is a struggle for some enterprises. Even so, almost everyone is planning on growing their cloud program, despite the concerns some have expressed about vendor lock-in. The vast majority of enterprises will continue on with their cloud journey, and around a third are predicted to go full-steam ahead, migrating “as quickly as is feasible”. This is by no means the case for all enterprises, though; around half wish to migrate more cautiously. Vendor lock-in also appears to be a major issue for most enterprises. The majority express significant concern about the consequences of putting all their eggs in one cloud provider’s basket. Only a fearless few do not see this as a concern, and perhaps that fearlessness is the way to go.


Why data and machine intelligence will become the new normal in insurance

In the next 3-5 years, the digital insurance consumer will likely remain the millennial, with higher levels of income and education. It is important, though, not to assume homogeneity and develop solutions based on lazily assessed group characteristics. Personalisation is more important now than it has ever been. Beyond functionality and ease of access, emotions and personal growth are key drivers of consumption behaviour, and as in any other group, there is a diverse set of expectations and desires within this one. Tailoring services and online buying journeys to the individual rather than the group is paramount; in the same way that offering life insurance immediately following a bereavement could be viewed as inappropriate, so too could an offer of social insurance be offensive to a staunch individualist. Certain benefits, although appealing on the surface to members of 'the group', may not work at a more nuanced level; a donation to an environmental charity with every policy bought will not appeal to every millennial.


Paying a Ransom: Does It Really Encourage More Attacks?

Although Phil Reitinger, a former director of the National Cyber Security Center within the Department of Homeland Security, doesn’t expect the pipeline company's apparent ransom payment to serve as a catalyst for other ransomware gangs, he acknowledges that the impact the attack had on pipeline operations could encourage those interested in causing similar mayhem. "I don't see paying this particular ransom as that different from others, in the sense of opening up critical infrastructure as a target," he says. "Indeed, I expect there to be a reduction in criminal attacks on critical infrastructure as this ransomware gang now has a big target on its back," says Reitinger, who's now president and CEO of the Global Cyber Alliance. "However, the effectiveness of the attack may well increase the incentive for other actors who want to disrupt rather than cash a check." The ransomware-as-a-service gang behind DarkSide announced Thursday it was shutting down its operation after losing access to part of its infrastructure. A ransomware attack by a nation-state or highly competent gang, such as DarkSide, is almost impossible to stop, Maor says. But he points out that such attacks aren't easy to pull off.


Using Data as Currency: Your Company’s Next Big Advantage

Today’s world is increasingly data-driven, and companies are amassing unique data assets that have numerous and valuable implications for analytics, modeling, insights, personalization and targeting purposes. Most companies don’t know how to turn their mountains of data into real value for their business or their customers. But the companies that do are rewarded with market valuations that far exceed their peers’. Amazon, Nike, Progressive, Hitachi, and others recognize that winning in a digitally driven world is about using data as currency, and the CIO and CTO are key to making that happen. But what does “data as currency” mean? For a while now, we have heard a number of leaders claim that “data is the new oil”. ... Data’s flexibility arguably gives it even more value than oil and other currencies, assuming companies can leverage it properly. For instance, many product companies sit on customer interaction data that could better predict demand and so optimize their manufacturing output and supply chains. Internal data on employee job assignments, self-driven trainings, and micro-experiences could match talent to upcoming opportunities.


Implementing Microservicilities with Quarkus and MicroProfile

In a microservice architecture, we should develop with failure in mind, especially when communicating with other services. In a monolithic application, the application as a whole is up or down. But when the application is broken down into microservices, it is composed of several services, all interconnected by the network, which implies that some parts of the application might be running while others fail. It is important to contain a failure to avoid propagating the error to the other services. Resiliency (or application resiliency) is the ability of an application or service to react to problems and still provide the best possible result. ... Elasticity (or scaling) is something Kubernetes had in mind from the very beginning: for example, running the kubectl scale deployment myservice --replicas=5 command scales the myservice deployment to five replicas or instances. The Kubernetes platform takes care of finding the proper nodes, deploying the service, and keeping the desired number of replicas up and running at all times.
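
A minimal Python sketch of the retry-plus-fallback idea behind resiliency (MicroProfile Fault Tolerance expresses the same pattern declaratively in Java with annotations such as @Retry and @Fallback; this stand-alone version only illustrates the concept, and the helper names are hypothetical):

    import time

    def call_with_resilience(remote_call, fallback, retries=3, delay=0.2):
        """Try a remote call a few times, then degrade gracefully via the fallback."""
        for _ in range(retries):
            try:
                return remote_call()
            except Exception:
                time.sleep(delay)  # brief backoff before retrying
        return fallback()          # contain the failure: best possible result

    def flaky_call():
        raise RuntimeError("service down")  # simulated failing remote call

    # Hypothetical usage: serve an empty list if the recommendations service is down.
    result = call_with_resilience(flaky_call, fallback=lambda: [])
    print(result)  # []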



Quote for the day:

"To get a feel for the true essence of leadership, assume everyone who works for you is a volunteer." -- Kouzes and Posner