Daily Tech Digest - November 03, 2020

Why Securing Secrets in Cloud and Container Environments Is Important – and How to Do It

In containerized environments, secrets auditing tools make it possible to recognize the presence of secrets within source code repositories, container images, across CI/CD pipelines, and beyond. Deploying container services will activate platform and orchestrator security measures that distribute, encrypt and properly manage secrets. By default, secrets are secured in system containers or services — and this protection suffices in most use cases. However, for especially sensitive workloads — and Uber’s customer database backend service is a strong example, as are any data encryption or standard image scanning use cases — it’s not adequate to simply rely on conventional secret store security and secret distribution. These sensitive use cases call for more robust defense-in-depth protections. Within container environments, defense-in-depth implementations leverage deep packet inspection (DPI) and data leakage prevention (DLP) to enable monitoring of secrets while they are in use. Any transmission of a secret via network packets can be recognized, flagged and blocked if inappropriate. In this way, the most sensitive data can be effectively secured throughout the full container lifecycle, and attacks that could otherwise result in breach incidents can be thwarted by this additional layer of safeguards.
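
As a rough illustration of the auditing step – recognizing secrets inside a source code repository – the sketch below scans a directory tree for a few common secret patterns in Python. The patterns, paths and output format are illustrative assumptions, not the rule set of any particular auditing tool.

```python
# Minimal sketch of a secrets audit pass over a source tree.
# The regex patterns below are illustrative, not a complete rule set.
import re
from pathlib import Path

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_repo(root: str):
    """Walk the tree and report (file, pattern name, redacted preview) findings."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append((str(path), name, match.group(0)[:12] + "..."))
    return findings

if __name__ == "__main__":
    for file, kind, preview in scan_repo("."):
        print(f"{file}: possible {kind}: {preview}")
```

In practice such a scan would also run in the CI/CD pipeline and against built container images, so a flagged secret blocks the build before it is ever distributed.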


Large-Scale Multilingual AI Models from Google, Facebook, and Microsoft

While Google's and Microsoft's models are designed to be fine-tuned for NLP tasks such as question-answering, Facebook has focused on the problem of neural machine translation (NMT). Again, these models are often trained on publicly available data, consisting of "parallel" texts in two different languages, and again the problem of low-resource languages is common. Most models therefore train on data where one of the languages is English, and although the resulting models can do a "zero-shot" translation between two non-English languages, often the quality of such translations is sub-par. To address this problem, Facebook's researchers first collected a dataset of parallel texts by mining Common Crawl data for "sentences that could be potential translations," mapping sentences into an embedding space using an existing deep-learning model called LASER and finding pairs of sentences from different languages with similar embedding values. The team trained a 15.4B-parameter Transformer model on this data. The resulting model can translate between 100 languages without "pivoting" through English, with performance comparable to dedicated bilingual models.
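
To make the mining step concrete, here is a small Python sketch that pairs sentences from two languages by cosine similarity of their embeddings. It assumes the embeddings (for example, produced by LASER) are already available as NumPy arrays; at Common Crawl scale the real pipeline would use approximate nearest-neighbour search rather than a full similarity matrix.

```python
# Toy sketch of parallel-sentence mining: pair sentences whose embeddings
# (e.g. from LASER) are closest in cosine similarity. Embeddings are stand-ins.
import numpy as np

def mine_pairs(src_emb: np.ndarray, tgt_emb: np.ndarray, threshold: float = 0.5):
    # Normalize rows so the dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T                         # (n_src, n_tgt) similarity matrix
    best = sims.argmax(axis=1)                 # best candidate for each source sentence
    return [(i, int(j), float(sims[i, j]))
            for i, j in enumerate(best) if sims[i, j] >= threshold]

# Random vectors stand in for real sentence embeddings in this demo.
rng = np.random.default_rng(0)
print(mine_pairs(rng.normal(size=(5, 64)), rng.normal(size=(7, 64)), threshold=0.0))
```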


How to Prevent Pwned and Reused Passwords in Your Active Directory

There are many different types of dangerous passwords that can expose your organization to tremendous risk. One way that cybercriminals compromise environments is by making use of breached password data, which allows them to launch password spraying attacks against your environment. Password spraying involves trying only a few passwords against a large number of end users. In a password spraying attack, cybercriminals will often use databases of breached passwords, a.k.a. pwned passwords, to effectively try these passwords against user accounts in your environment. The philosophy here is that across many different organizations, users tend to think in very similar ways when it comes to creating passwords they can remember. Often, passwords exposed in other breaches will be passwords that other users are using in totally different environments. This, of course, increases risk, since a compromised password will expose not just a single account but multiple accounts if it is reused across different systems. Pwned passwords are dangerous and can expose your organization to the risks of compromise, ransomware, and data breach threats. What types of tools are available to help discover and mitigate these types of password risks in your environment?
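
One common building block for such tooling is the Have I Been Pwned range API, which lets you check whether a password appears in known breach corpora without ever sending the password (or its full hash) off your network. The sketch below uses only the Python standard library and assumes outbound HTTPS access is available.

```python
# Check a password against the Have I Been Pwned "pwned passwords" range API.
# Only the first five characters of the SHA-1 hash are sent (k-anonymity).
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A non-zero count means the password has appeared in known breaches.
    print(pwned_count("P@ssw0rd"))
```

Directory-integrated tools apply the same idea at scale, screening Active Directory password changes against breached-password lists before they are accepted.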


The practice of DevOps

Continuous integration (CI) is the process that aligns the code and build phases in the DevOps pipeline. This is the process where new code is merged with the existing structure, and engineers ensure that everything is working fine. Developers who make frequent changes to the code push those changes to the shared central code repository. The repository starts with a master branch, which is a long-term stable branch. For every new feature, a new branch is created, and the developer regularly (often daily) commits their code to this branch. After the development for a feature is complete, a pull request is created to the release branch. Similarly, a pull request is created to the master branch and the code is merged. We have seen slight variations to these practices across organisations. Sometimes the developers maintain a fork, or copy, of the central repository. This limits the merge issues to their own fork and isolates the central repository from the risk of corruption. Sometimes, the new branches don’t branch out from the feature branch but from the release or master branch. Small and mid-size companies often use open sites like GitHub for their code repository, while larger firms use Bitbucket, which is not free.
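
As a hedged illustration (not taken from the article), the feature-branch flow described above might look like this with the GitPython library; the repository path, branch name and file are placeholders, and the pull requests themselves would still be opened through the hosting platform (GitHub, Bitbucket, etc.).

```python
# Sketch of the feature-branch workflow using GitPython (pip install GitPython).
# Paths, branch names and commit messages are illustrative.
from git import Repo

repo = Repo("/path/to/local/clone")        # existing clone of the central repository

# Branch off the long-lived master branch for the new feature.
feature = repo.create_head("feature/login-page", commit="master")
feature.checkout()

# Regular (e.g. daily) commits to the feature branch, pushed to the shared repo.
repo.index.add(["src/login.py"])
repo.index.commit("Add login form validation")
repo.remotes.origin.push(feature.name)

# Once the feature is complete, a pull request into the release branch (and later
# into master) is created on the hosting platform rather than from this script.
```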


The 5 Biggest Cloud Computing Trends In 2021

Currently, the big public cloud providers – Amazon, Microsoft, Google, and so on – take something of a walled-garden approach to the services they provide. And why not? Their business model has involved promoting their platforms as one-stop shops, covering all of an organization's cloud, data, and compute requirements. In practice, however, the industry is increasingly turning to hybrid or multi-cloud environments (see below), with requirements for infrastructure to be deployed across multiple models. ... As far as cloud goes, AI is a key enabler of several ways in which we can expect technology to adapt to our needs throughout 2021. Cloud-based as-a-service platforms enable users on just about any budget and with any level of skill to access machine learning functions such as image recognition tools, language processing, and recommendation engines. Cloud will continue to allow these revolutionary toolsets to become more widely deployed by enterprises of all sizes and in all fields, leading to increased productivity and efficiency. ... Amazon most recently joined the ranks of tech giants and startups offering their own platform for cloud gaming. Just as with music and video streaming before it, cloud gaming promises to revolutionize the way we consume entertainment media by offering instant access to vast libraries of games that can be played for a monthly subscription.


Quantum computers are coming. Get ready for them to change everything

The challenge lies in building quantum computers that contain enough qubits for useful calculations to be carried out. Qubits are temperamental: they are error-prone, hard to control, and always on the verge of falling out of their quantum state. Typically, scientists have to encase quantum computers in extremely cold, large-scale refrigerators, just to make sure that qubits remain stable. That's impractical, to say the least. This is, in essence, why quantum computing is still in its infancy. Most quantum computers currently work with fewer than 100 qubits, and tech giants such as IBM and Google are racing to increase that number in order to build a meaningful quantum computer as early as possible. Recently, IBM ambitiously unveiled a roadmap to a million-qubit system, and said that it expects a fault-tolerant quantum computer to be an achievable goal during the next ten years. Although it's early days for quantum computing, there is still plenty of interest from businesses willing to experiment with what could prove to be a significant development. "Multiple companies are conducting learning experiments to help quantum computing move from the experimentation phase to commercial use at scale," Ivan Ostojic, a partner at consultancy McKinsey, tells ZDNet.


How enterprise architects and software architects can better collaborate

After the EAs have developed the enterprise architecture map, they should share these plans with software architects across each solution, application, or system. After all, it’s important that the software architect who works most closely with the solution shares their own insight with the enterprise architect, who is more concerned with high-level architecture. Software architects and EAs can collaborate and suggest changes or improvements based on the existing architecture. Software architects can then go in and map out the new architecture based on the business requirements. Non-technical leaders can gain a better understanding, and this can lead to quicker alignment. Software architects can evaluate the quality of the architecture and share their learnings with the enterprise architect, who can then incorporate findings into the enterprise architecture. Software architects are also largely interested in standardization, and they can help enterprise architects scale that standardization across the business. Once the EA has developed a full model or map, it’s easier to see where assets can be reused. Software architects can recommend standardization and innovation and weigh in on the EA’s suggestions for optimizing enterprise resources.


Dealing with Psychopaths and Narcissists during Agile Change

Many of the techniques or practices you use with healthy people do not work well with psychopaths or narcissists. For example, if you are using the Scrum framework, it is very risky to include a toxic person as part of a retrospective meeting. Countless consultants also believe that coaching works with most folks. However, the psychopathic person normally ends up learning the coach’s tools and manipulating him or her for their own purposes. This obviously aggravates the problem. ... From an organizational point of view, these toxic people are excellent professionals, because they appear to perform almost any task successfully. This helps a company to “tick off” necessary accomplishments in the short term to increase agile maturity, managers to get their bonuses, and the psychopath to obtain greater prestige. Obviously, these things are not sustainable, and what seems to be agility is transformed in the medium term into fragility and loss of resilience. Agile also requires—apart from good organizational health—execution with purpose, and visions and goals that involve feelings and inspire people to move forward.


Responsible technology: how can the tech industry become more ethical?

Three priorities right now should be crafting smart regulations, increasing the diversity of thought in the tech industry (i.e. adding more social scientists), and bridging the gulf between those who develop the technology and those who are most impacted by its deployment. If we truly want to impact behavior, smart regulations can be an effective tool. Right now, there is often a tension between “being ethical” and being successful from a business perspective. In particular, social media platforms typically rely on an ad-based business model where the interests of advertisers can run counter to the interests of users. Adding diverse thinkers to the tech industry is important because “tech” should not be confined to technologists. What we are developing and deploying is impacting people, which heightens the need to incorporate disciplines that naturally understand human behavior. By bridging the gulf between the individuals developing and deploying technology and those impacted by it, we can better align our technology with our individual and societal needs. Facial recognition is a prime example of a technology that is being deployed faster than our ability to understand its impact on communities.


Enterprise architecture has only begun to tap the power of digital platforms

The problem that enterprises have been encountering, Ross says, is getting hung up at stage three. “We observed massive failures in business transformations that frankly were lasting six, eight, 10 years. It’s so hard because it’s an exercise in reductionism, in tight focus,” she said. “We recommend that companies zero in on their single most important data. This is the packaged data you keep – the customer data, the supply chain data… this is the thing that matters most. If they get this right, then things will take off.” The challenge is now moving past this stage, as in the fourth stage, “we actually understand now that what’s starting to happen is we can start to componentize our business,” says Ross. She estimates that only about seven percent of companies have reached this stage. “This is not just about plugging modules into this platform, this is about recognizing that any product or process can be decomposed into people, process and technology bundles. And we can assign individual teams or even individuals’ accountability for one manageable piece that that team can keep up to date, improve with new technology, and respond to customer demand.”



Quote for the day:

"Let him who would be moved to convince others, be first moved to convince himself." -- Thomas Carlyle

Daily Tech Digest - November 02, 2020

India needs IoT security standards

The bigger issue is that most of the sectors using digital technologies or integrating emerging technologies do not, to date, have a digital risk element defined by the sectoral regulators. A national cyber strategy highlighting the key risks to these sectors is still awaiting cabinet approval. Hence, fighting ransomware, advanced persistent threats and malware is becoming tough for the industry, which doesn’t have a framework to rely upon to test or audit its systems. Earlier this year, the European standards body ETSI released a consumer IoT security standard. The standard specifies high-level security and data protection provisions for consumer IoT devices, which include IoT gateways, base stations and hubs, smart cameras, TVs, smart washing machines, wearables, health trackers, home automation systems, connected gateways, refrigerators, and door lock and window sensors. This standard provides a minimum baseline for securing devices and sets provisions for consumer IoT. It lays the foundation for setting strong password controls for IoT devices by stating that all consumer IoT device passwords must be unique. In India, and across the world, we see consumer IoT devices being sold with universal default usernames and passwords.


How To Build Your Own Chatbot Using Deep Learning

Before jumping into the coding section, we first need to understand some design concepts. Since we are going to develop a deep learning-based model, we need data to train our model. But we are not going to gather or download any large dataset, since this is a simple chatbot. We can just create our own dataset in order to train the model. To create this dataset, we need to understand what intents we are going to train. An “intent” is the intention of the user interacting with a chatbot, or the intention behind each message that the chatbot receives from a particular user. Depending on the domain for which you are developing a chatbot solution, these intents may vary from one chatbot solution to another. Therefore it is important to understand the right intents for your chatbot with relevance to the domain that you are going to work with. So why do we need to define these intents? That’s a very important point to understand. In order to answer questions, search the domain knowledge base and perform various other tasks to continue conversations with the user, your chatbot really needs to understand what users say or what they intend to do. That’s why your chatbot needs to understand the intents behind user messages (to identify the user’s intention).
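
To make the idea of intents concrete, here is a toy intent dataset and classifier. The article trains a deep learning model; this sketch uses scikit-learn purely for brevity, and the intents and sample utterances are invented, but the shape of the training data – (utterance, intent label) pairs grouped by intent – is the same.

```python
# Toy intent dataset and classifier. Intents and utterances are made up;
# a production chatbot would use many more samples and a stronger model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

intents = {
    "greeting": ["hi", "hello there", "good morning"],
    "opening_hours": ["when do you open", "what are your opening hours"],
    "goodbye": ["bye", "see you later", "thanks, goodbye"],
}

texts = [t for samples in intents.values() for t in samples]
labels = [label for label, samples in intents.items() for _ in samples]

model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["hello, are you open today?"]))  # -> predicted intent label
```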


Finnish government rolls out digital projects to support SMEs

Finland’s plan aims to use the EDIH model to strengthen the digital capabilities of companies across the country. The central strategy is based on helping Finnish SMEs to easily access, exploit and profit from the more extensive use of business-impacting technologies such as artificial intelligence (AI), robotics and high-performance computing. The design of the Finnish NDIH plan means it can link directly into the EU’s EDIH knowledge base and network. The prospective Finnish hubs will have the built-in ability to accelerate the digital transformation not just in Finland but also on a wider EU level. Earlier this year, the Finnish government rolled out an open survey scheme to measure interest from the country’s technology sector in these network projects. The survey was designed to help determine which Finnish tech actors may be qualified to join the hubs. The results from the information survey will amplify the Finnish government ministry’s ability to develop a national framework. Moreover, it will allow it to organise and complete an application round for Finland’s candidates to the European-wide hub by year-end 2020.


Data Privacy is a Brand Reputation Issue, Not a Compliance Issue

First, establish a privacy policy that is legal and ethical. Then, communicate the privacy and data utilization policy clearly, without all the obfuscating legalese. Demonstrate that consumer data will be collected only with honest and clear consent, not coercive or deceptive consent. Next, create personal data pods for consumers on their own devices, so they can download and control their data easily and instantly in a usable, structured format. Ask consumers if the brand can use non-invasive Edge AI tools to provide them with learnings and key insights into themselves that improve their lives without violating their desires or values. Use those insights to generate predictions and recommendations that enhance the consumer’s life, that serve their best interests, and that are delivered with emotionally intelligent humanity. Continuously seek, and embrace, candid consumer feedback on the actual value being created. Then, as real value is generated for the consumer using their available data, and the consumer’s trust increases, open up an honest and fiduciary dialogue about what additional data is needed in order to create even more personalized value. As more and more data, such as location, browsing, and other real-time data, is accumulated from the more trusting consumers, reach out to the hesitant consumers and share use cases.


What Are The Fastest Growing Cybersecurity Skills In 2021?

Cybersecurity professionals with cloud security skills can gain a $15,025 salary premium by capitalizing on strong market demand for their skills in 2021. DevOps and Application Development Security professionals can expect to earn a $12,266 salary premium based on their unique, in-demand skills. 413,687 job postings for Health Information Security professionals were posted between October 2019 and September 2020, leading all skill areas in demand. Cybersecurity's fastest-growing skill areas reflect the high priority organizations place on building secure digital infrastructures that can scale. Application Development Security and Cloud Security are far and away the fastest-growing skill areas in cybersecurity, with projected 5-year growth of 164% and 115%, respectively. This underscores the shift from retroactive security strategies to proactive security strategies. According to the U.S. Bureau of Labor Statistics' Information Security Analyst's Outlook, cybersecurity jobs are among the fastest-growing career areas nationally. The BLS predicts cybersecurity jobs will grow 31% through 2029, over seven times faster than the national average job growth of 4%.


Q&A on the Book Accelerating Software Quality

There are multiple challenges that can be divided across test automation creation and maintenance, test reporting and analysis, test management, testing trends, and debugging. Traditional tools are not efficient enough to provide practitioners with reliable, robust, and maintainable test scripts. Test automation scripts keep breaking when developers make code changes to the apps, or when elements in the app aren’t properly recognized by the test automation framework. Ongoing maintenance of scripts is also a challenge that causes lots of false negatives and noise that feeds into the CI pipeline. As test execution scales, large test data accumulates and needs to be sliced and diced to find the most relevant issues. Here, traditional tools are limited in filtering big test data and providing data-driven smart decisions, trends, root causes of failures, and more. Lastly, the time it takes to create and debug a new code-based script is far too long to fit into today’s aggressive timelines. Hence, AI and ML are in a great position to close this gap by automatically generating test code and maintaining it through self-healing methods.


How three simple steps will help you save on your cloud spending

The first crucial step is to get hold of the data in a format that will help you understand how money is being spent. This will allow you to put guard rails up to protect against overspending. To do this, you’ll need to utilise tools that break down the usage data. Otherwise, you will just receive a bill that looks like a long shopping list – one that is almost impossible to decipher. AWS does offer tools that can help here, but you can also add third-party solutions like CloudCheckr to collate this information and play it back to you, with actionable metrics. This will also alert you to any underused resources that could be turned off. You will also want to ensure you implement a consistent tagging strategy across all of your cloud estate. This will allow you to break spending down to the individual customer, department, team and even developer. It will also help you understand where you’re getting a good return on investment and where you might need to be more efficient. It’s essential that you go native in public cloud infrastructures – otherwise you will miss key features that allow you to automate and orchestrate. For example, you should be taking advantage of features that will automate day-to-day management tasks, such as software updates and back-ups.
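
As a small, hedged example of the tagging point, the boto3 sketch below flags EC2 instances that are missing required cost-allocation tags. The region and tag keys are assumptions, and a real audit would cover every resource type in the estate, not just EC2.

```python
# Flag EC2 instances missing the cost-allocation tags the tagging strategy requires.
# Region name and required tag keys are illustrative.
import boto3

REQUIRED_TAGS = {"customer", "department", "team"}

ec2 = boto3.client("ec2", region_name="eu-west-1")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                print(f"{instance['InstanceId']} is missing tags: {sorted(missing)}")
```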


Banking Must Reduce AI Talent Gap for Digital Transformation Success

Using outside talent to improve productivity and results with data and AI technology is definitely a valid path in the short run, especially as most banks and credit unions play catch-up in the race to leverage data insights across the organization. But, as mentioned, the ability to deploy the insights to specific product and service needs requires the experience of those who have known the business for years. Without the involvement of the users of the data and AI results, technology is deployed in a vacuum. Marketing managers need to understand the targeting and personalization methodology the models create. Product managers must understand the changes to processes and procedures that are recommended by AI technologies to ensure all of the required steps are in place for compliance purposes. And risk managers need to feel comfortable that the assumptions made by models continue to reflect the cybersecurity requirements of the organization. Upgrading the skills of the consumers of data and AI solutions is usually done by training the organization's existing employees. This is usually a much more efficient and less disruptive process than trying to teach technology people the internal intricacies of an organization.


Tackling multicloud deployments with intelligent cloud management

Looking at what the public cloud providers offer in terms of a control plane for managing workloads, McQuire says Azure focuses on hybrid to edge and on-premise workload management. “Google Cloud has been a latecomer in the enterprise and going full board with cloud management,” he adds. For the moment, says McQuire, the focus is on orchestration and control, security and governance. He says this is a reflection of where IT organisations are in terms of how they are using multiple public clouds. “There is a need to understand the economic impact of moving workloads around,” he says. “Not only do you have a need to understand the performance of different IT environments, whether to deploy on-premise, in a private cloud or use one of the three public clouds, there is also a requirement to understand the economics associated with those decisions.” It is now not uncommon for IT decision-makers to standardise on one public cloud for specialist workloads such as artificial intelligence (AI), and use another for infrastructure as a service (IaaS). McQuire adds: “Two years ago, companies started running machine learning workloads with a single cloud provider. ..."


How Service Mesh Enables a Zero-Trust Network

To safeguard user data, organizations are adopting a zero-trust security model. “Zero trust security means that we’re not trusting anybody,” said Palladino. “We don’t trust our own services. We don’t trust our own team members.” Placing too much trust in users, services and teams could cause a catastrophic failure. And, “there is no bigger risk than thinking you are secure, while in reality, you are not,” he said. ... Implementing these permissions is where a service mesh comes in. Application teams often build security themselves, yet it’s generally bad practice to build your own cybersecurity. Security for microservices requires high expertise, and standardizing connectivity between various microservices can easily result in fragmented security implementations. Instead of building your own security infrastructure, Palladino recommended utilizing a service mesh. By using a service mesh as a control plane for microservices, platform architects can specify rules and attributes to generate an identity on a per-service basis. A service mesh also removes the burden of networking from developers, enabling them to focus more on their core logic. “Application teams become consumers of connectivity, as opposed to the makers of this connectivity,” Palladino noted.



Quote for the day:

"You can't use up creativity. The more you use, the more you have." -- Maya Angelou

Daily Tech Digest - November 01, 2020

Why is Site Reliability Engineering Important?

“The term SRE surely has been introduced by Google, but directly or indirectly several companies have been doing stuff related to SRE for a long time, though I must say that Google gave it a new direction after coining the term ‘SRE.’ I have a clear view on SRE as I believe it walks hand-in-hand with DevOps. All your infrastructure, operations, monitoring, performance, scalability and reliability factors are accounted for in a nice, lean and automated system (preferably); however this is not enough. Culture is an important aspect driving the SRE aspects, along with business needs. As the norm ‘to each, his own’ goes, SRE is no different. It is easy to get inspired from pioneer companies, but it’s impossible to copy their culture and means to replicate the success, especially with your ‘anti-patterns’ and ‘traditional’ remedial baggage. Do you have similar infrastructure and business needs as the company showcasing brilliant success with SRE? No. Can it help you? Absolutely. The key factor here is to recognize what is important to your success blueprint after understanding the fundamentals of it and find your own success factors considering your cultural needs. Your strategy and culture need to walk together, just like your guiding (strategy) and driving (culture) factors.”


AI in Healthcare — Is the Future of Healthcare already here?

Through a series of neural networks, AI is helping healthcare providers achieve this balance. Facial recognition software is combined with machine learning to detect patterns in facial expressions that point us towards the possibility of a rare disease. Moon, developed by Diploid, enables early diagnosis of rare diseases through the software, allowing doctors to begin treatment early. Artificial Intelligence in Healthcare carries special significance in detecting rare diseases earlier than they otherwise could be. ... Health monitoring is already a widespread application of AI in Healthcare. Wearable health trackers such as those offered by Apple, Fitbit, and Garmin monitor activity and heart rates. These wearables are then in a position to send all of the data forward to an AI system, bringing in more insights and information about the ideal activity requirement of a person. These systems can detect workout patterns and send alerts when someone misses their workout routine. The needs and habits of a patient can be recorded and made available to them when need be, improving the overall healthcare experience. For instance, if a patient needs to avoid heavy cardiac workouts, they can be notified when high levels of activity are detected.


Why kids need special protection from AI’s influence

Algorithms can change the course of children’s lives. Kids are interacting with Alexas that can record their voice data and influence their speech and social development. They’re binging videos on TikTok and YouTube pushed to them by recommendation systems that end up shaping their worldviews. Algorithms are also increasingly used to determine what their education is like, whether they’ll receive health care, and even whether their parents are deemed fit to care for them. Sometimes this can have devastating effects: this past summer, for example, thousands of students lost their university admissions after algorithms—used in lieu of pandemic-canceled standardized tests—inaccurately predicted their academic performance. Children, in other words, are often at the forefront when it comes to using and being used by AI, and that can leave them in a position to get hurt. “Because they are developing intellectually and emotionally and physically, they are very shapeable,” says Steve Vosloo, a policy specialist for digital connectivity at Unicef, the United Nations Children’s Fund. Vosloo led the drafting of a new set of guidelines from Unicef designed to help governments and companies develop AI policies that consider children’s needs.


A new threat matrix outlines attacks against machine learning systems

Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE’s Decision Science research programs, says that we’re now at the same stage with AI as we were with the internet in the late 1980s, when people were just trying to make the internet work and when they weren’t thinking about building in security. We can learn from that mistake, though, and that’s one of the reasons the Adversarial ML Threat Matrix has been created. “With this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning,” he noted. Also, the matrix will help them think holistically and spur better communication and collaboration across organizations by giving a common language or taxonomy of the different vulnerabilities, he says. “Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle,” MITRE noted.


Understanding the modular monolith and its ideal use cases

Conventional monolithic architectures focus on layering code horizontally across functional boundaries and dependencies, which inhibits their ability to separate into functional components. The modular monolith revisits this structure and configures it to combine the simplicity of single-process communication with the freedom of componentization. Unlike the traditional monolith, modular monoliths attempt to establish bounded contexts by segmenting code into individual feature modules. Each module exposes a programming interface definition to other modules; altering that definition can trigger its dependents to change in turn. Much of this rests on stable interface definitions. However, by limiting dependencies and isolating data stores, the architecture establishes boundaries within the monolith that resemble the high cohesion and low coupling found in a microservices architecture. Development teams can start to parse functionality, but can do so without worrying about the management baggage tied to multiple runtimes and asynchronous communication. One benefit of the modular monolith is that the logic encapsulation enables high reusability, while data remains consistent and communication patterns simple.
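
A minimal Python sketch of the idea, with invented module names: each module owns its data and exposes only a narrow interface, while calls between modules remain simple in-process calls rather than network hops.

```python
# Two feature modules inside one process: OrderModule depends only on
# BillingModule's public interface, never on its private data store.
from dataclasses import dataclass, field

@dataclass
class BillingModule:
    _invoices: dict = field(default_factory=dict)   # data isolated to this module

    def create_invoice(self, order_id: str, amount: float) -> str:
        invoice_id = f"INV-{len(self._invoices) + 1}"
        self._invoices[invoice_id] = {"order": order_id, "amount": amount}
        return invoice_id

@dataclass
class OrderModule:
    billing: BillingModule                          # dependency on the interface only

    def place_order(self, order_id: str, amount: float) -> str:
        # Plain in-process call: no serialization, network hop or async plumbing.
        return self.billing.create_invoice(order_id, amount)

orders = OrderModule(billing=BillingModule())
print(orders.place_order("order-42", 99.0))         # -> "INV-1"
```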


What's Wrong With Big Objects In Java?

There are several ways to fix or at least mitigate this problem: tune the GC, change the GC type, fix the root cause, or upgrade to a newer JDK. Tuning the GC, in this case, means increasing the heap or increasing the region size with -XX:G1HeapRegionSize so that previously Humongous objects are no longer Humongous and follow the regular allocation path. However, the latter will decrease the number of regions, which may negatively affect GC performance. It also means coupling GC options with the current workload (which may change in the future and break your current assumptions). However, in some situations, that's the only way to proceed. A more fundamental way to address this problem is to switch to the older Concurrent Mark-Sweep (CMS) garbage collector, via the -XX:+UseParNewGC -XX:+UseConcMarkSweepGC flags (unless you use one of the most recent JDK versions, in which this collector is deprecated). CMS doesn't divide the heap into numerous small regions and thus doesn't have a problem handling several-MB objects. In fact, in relatively old Java versions CMS may perform even better overall than G1, at least if most of the objects that the application creates fall into two categories: very short-lived and very long-lived.


Analysis: Tactics of Group Waging Attacks on Hospitals

UNC1878 has recently changed some of its tactics. For example, it no longer uses Sendgrid to deliver the phishing emails and to supply the URLs that lead to the malicious Google documents, Mandiant reports. "Recent campaigns have been delivered via attacker-controlled or compromised email infrastructure and have commonly contained in-line links to attacker-created Google documents, although they have also used links associated with the Constant Contact service," according to the Mandiant report. Hosting the malicious documents on a legitimate service is also a new twist. Earlier campaigns were hosted on a compromised infrastructure, Mandiant researchers say. Once the group delivers a loader via a malicious document, it downloads the Powertrick backdoor and/or Cobalt Strike Beacon payloads to establish a presence and to communicate with the command-and-control server, the report says. Mandiant notes that the group uses Powertrick infrequently, perhaps for establishing a foothold and performing initial network and host reconnaissance. ... The group maintains persistence by creating a scheduled task, adding itself to the startup folder as a shortcut, creating a scheduled Microsoft BITS job using /setnotifycmdline and in some cases using stolen login credentials, the report says.


The secret to designing a positive future with AI? Imagination

Focusing on the positive is key to steering toward a positive destination. Instead of being passive passengers in a collective spaceship veering towards dangerous planets, we can instead actively move in the direction of the outcomes we want, such as full employment and equity. This is, at its heart, an exercise in vision. To be sure, realizing that vision will require a commitment to idealism, hope, and an openness towards change and uncertainty. But the vision is paramount and will set our future course. ... Building such a vision is a collective intelligence exercise that requires many voices from around the world. In taking this step, we can empower participants from various backgrounds and countries to make this vision real and identify the implications of that long-term vision for present-day policy decisions. Such work can seem like a creative writing prompt, but it was actually a key exercise undertaken by the World Economic Forum’s Global AI Council (GAIC), a multi-stakeholder body that includes leaders from the public and private sectors, civil society and academia. In April 2020, we began pursuing an ambitious initiative called Positive AI Economic Futures, taking as its starting point the hypothesis that AI systems will eventually be able to do the great majority of what we currently call work, including all forms of routine physical and mental labour.


How Kubernetes extends to machine learning (ML)

The scalability of Kubernetes, alongside the flexibility of ML, can allow developers within the open source space to innovate without experiencing strain on their workloads. Thomas Di Giacomo, president of engineering and innovation at SUSE, explained: “Kubernetes and cloud native technologies enable a broad selection of applications because they serve as a reliable connecting mechanism for a multitude of open source innovations, ranging from supporting various types of infrastructures and adding AI and ML capabilities to help make developers’ lives simpler and business applications more streamlined. “Kubernetes facilitates fast, simple management and clear organisation of containerised services and applications. The technology also enables the automation of operational tasks, like application availability management and scaling. “There’s no denying that AI and ML technologies will have a massive impact on the open source market. Developed by the community, AI open source projects will help to develop and train ML models, and will provide a powerful feedback loop that will enable faster innovation. “We have already witnessed that at SUSE, having been working on and developing AI and ML solutions together with Kubernetes to streamline their use by data scientists, who can then focus on their own needs and processes rather than the mechanics.”


REPORT: Consumer Privacy Concerns Demand Regulatory Compliance

Data privacy is gaining more attention from consumers in multiple markets, including the European Union and the United States. A study of U.S. consumers found that 87 percent feel data privacy should be considered a human right, for example. Many respondents are also wary of what businesses are doing with their information, with roughly 70 percent of consumers stating that they do not trust companies to sell their data ethically. Such trends are leading businesses and regulators to reconsider the country’s existing data privacy and security standards. Data security is also being debated in the EU. The region’s General Data Protection Regulation (GDPR) that governs online data collection and storage has been in place for approximately two years, but large companies are more frequently coming under regulatory scrutiny as consumers become more familiar with the rule. Google-owned video streaming service YouTube, for example, is currently facing a lawsuit over whether its data practices violate GDPR. The suit alleges that the platform fails to comply with GDPR because it collects data from minors, who cannot legally consent to sharing their digital information under the regulation.



Quote for the day:

"Risks are the seeds from which successes grow." -- Gordon Tredgold

Daily Tech Digest - October 31, 2020

Six frightening data stories that will give you nightmares

Acarophobia is a fear of tiny, crawling, parasitic insects; apiphobia is a fear of bees; and arachnophobia, a fear of spiders. But what is the term for a phobia of those beastly bugs that can bring down an entire server? This happened at a London advertising agency! The creative team had an important customer deadline to meet and they could no longer access their critical Adobe Illustrator data and other large creative files. The disaster recovery plan would take two days to restore the data – one day after the job deadline. The clock was ticking… The problem was the recovery time objective (RTO) set up years ago, and because the longer the RTO, the lower the price, this firm thought a shorter RTO wasn’t worth it. But don’t be fooled when it comes to protecting your business-critical data, for there’s always a price to pay… You have friends coming for a Halloween party and arrive home from the supermarket, bags full of decorations, drinks, and ice, only to find that you don’t have your house key. No doubt workers who had planned to work on some company files, only to realise they cannot access them when working from home, feel the same way, especially during this Covid-19 pandemic. Users may be completely locked out of their data files, but more often they face a tedious and clunky experience when accessing those files.


Honeywell introduces quantum computing as a service with subscription offering

The H1 has been up and running for several months internally at Honeywell, but has been in use by customers for about three weeks, said Uttley. Honeywell has been working with eight enterprise customers, including DHL, Merck, and JP Morgan Chase. Some of those customers had been working on the H0 system and were able to easily "port over" work to the new machine, said Uttley. One reason for the subscription is that there is still substantial hand-holding involved. Those windows of time include participation by Honeywell quantum theorists and operations teams, who work "hand in hand" with customers. Honeywell's hands-on approach to customer subscriptions makes sense given that much of the work customers will be doing initially is to build a sense of trust, said Uttley. They will be seeing what results they get from the quantum computer and matching those to the same work on a classical computer, to validate that the quantum system produces correct output. On top of the blocks of dedicated time, each subscriber can get queueing time, said Uttley, where jobs are processed as capacity is available.


JPM Coin debut marks start of blockchain’s value-driven adoption cycle

In a recent interview, JP Morgan’s global head of wholesale payments stated that the launch of JPM Coin as well as certain other “behind the scenes moves” prompted the banking giant to create a new business outfit called Onyx. The unit will allow the company to spur its focus on its various ongoing blockchain and digital currency efforts. Onyx reportedly has more than 100 staff members and has been established with the goal of commercializing JP Morgan’s various envisioned blockchain and crypto projects, moving existing ideas from their research and development phase to something more tangible. When asked about their future plans and if crypto factors majorly into the company’s upcoming scheme of things, a media relations representative for J.P. Morgan told Cointelegraph that there are no additional announcements on top of what was already unveiled recently. Lastly, on Oct. 28, the bank announced that it was going to rebrand its blockchain-based Interbank Information Network, or IIN, to “Liink” as well as introduce two new applications — Confirm and Format — that have been developed for specific purposes of account validation and fraud elimination for its clients. Liink will be a part of the Onyx ecosystem and will enable participants to collaborate with one another in a seamless fashion.


What is DevOps? with Donovan Brown

“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.” – Donovan Brown. Why we do “DevOps” comes down to that one big word Donovan highlights… value. Our customers want the services we provide to them to always be available, to be reliable, and to let them know if something is wrong. These are the same expectations we should all take when working together to deliver the application or service our end user will experience. By producing an environment that values a common goal amongst our team, we can see greater productivity and success for our users. Donovan Brown opens the “Deliver DevOps” event at the Microsoft Reactor in the UK by taking us for a lap around Azure DevOps concluding with the announcement of a new UK geo to store your Azure DevOps data. What is DevOps? This is a question that seems to be constantly debated. Is it automation? Is it culture? Is DevOps a team? Is DevOps a philosophy? All great things to ask. By looking to DevOps, teams are able to provide the most value for their customers. In this video, Donovan Brown, Principal DevOps Manager at Microsoft, gives us what the Microsoft definition of DevOps in just a few minutes.


Driving remote workforce efficiency with IoT security

As with all cybersecurity issues, no “one size fits all” approach to IoT security exists. At the core, the IoTSCF provides guidance across compliance classes. However, it does set some specific minimum requirements for all IoT devices. Among these security controls, the IoTSCF suggests: Having an internal organizational member who owns and is responsible for monitoring the security; Ensuring that this person adheres to the compliance checklist process; Establishing a policy for interacting with internal and third-party security researchers; Establishing processes for briefing senior executives in the event the IoT device leads to a security incident; Ensuring a secure notification process for notifying partners/users; and Incorporating IoT and IoT-based security events as part of the Security Policy. From a hardware and software perspective, the following suggestions guide all compliance classes: Ensuring the product’s processor system has an irrevocable hardware Secure Boot process; Enabling the Secure Boot process by default; Ensuring the product prevents the ability to load unauthenticated software and files; and Ensuring that devices supporting remote software updates incorporate the ability to digitally sign software images ...


Why 2021 will be the year of low-code

Low-code will make it to the mainstream in 2021, with 75% of development shops adopting these platforms, according to Forrester's 2021 predictions for software development. This shift is due in part to the new working environment and product demands caused by the COVID-19 crisis. Forrester analysts found that "enterprises that embraced low-code platforms, digital process automation, and collaborative work management reacted faster and more effectively than firms relying only on traditional development." ... Forrester analysts also noted the importance of adjusting communication habits and workflows in the new year. The report notes that teams that had already invested in high-trust culture, agile practices, and cloud platforms found it easier to adapt to 100% remote work. Teams that relied on a command-and-control approach to work and older platforms struggled to adjust to this new environment. ... Making this happen will require sustained attention and active management: "Keeping developers out of endless virtual meetings while maintaining governance will particularly challenge organizations in regulated industries, and they will embrace value stream management as a way of maintaining data-informed insights and collecting process metrics that enable compliance and governance at scale."


Artificial Intelligence Is Modernizing Restaurant Industry

While technology is growing and benefiting many industries, certain industries are still struggling to survive. One such industry fighting a battle of endurance amidst its peers is the restaurant business. 52% of restaurant proprietors agree that high operating and food costs are the top difficulties they face while running their business. Restaurants can stay on top of everything through the proper implementation of technology in their business. One such technology said to have a critical impact on this industry niche is artificial intelligence. There are certainly various advantages to implementing artificial intelligence in restaurants, like improved customer experience, more sales, less food wastage, and so forth. ... The weather is an important factor in restaurant sales. Studies show that 7 out of 10 restaurants state that weather forecasts affect their sales. Perhaps it’s bright and an ideal day to enjoy a sangria on a patio with friends, or possibly it’s cold and desolate outside and you feel like having hot cocoa at a cozy bistro. Regardless of whether it’s sunny, cloudy, rainy, snowy or hotter than expected, customers are drawn to specific foods and beverages depending on the conditions outside.


Flipping the Odds of Digital Transformation Success

The technology is important, but the people dimension (organization, operating model, processes, and culture) is usually the determining factor. Organizational inertia from deeply rooted behaviors is a big impediment. Failure should not be an option, and yet it is the most common result. The consequences in terms of investments of money, organizational effort, and elapsed time are massive. Digital laggards fall behind in customer engagement, process efficiency, and innovation. In contrast, companies that are successful in mastering digital technologies, establishing a digital mindset, and implementing digital ways of working can reach a new rhythm of continuous improvement. Digital, paradoxically, is not a binary state, but one of ongoing innovation as new waves of disruptive technologies are released to the market. Consider, for example, artificial intelligence, blockchain, the Internet of Things, spatial computing, and, in time, quantum computing. Unsuccessful companies will find it extremely hard to leverage these advances, while digital organizations will be innovating faster and pulling further away from digital laggards—heading for that bionic future. Digital transformations can define careers as well as companies.


SREs: Stop Asking Your Product Managers for SLOs

One of the fundamental premises of site reliability engineering is that you should base your reliability goals—i.e., your service level objectives (SLOs)—on the level of service that keeps your customers happy. The problem is, defining what makes your customers happy requires communication between site reliability engineers (SREs) and product managers (PMs), aka business stakeholders, and that can be a challenge. Let’s just say that SREs and PMs have different goals and speak slightly different languages. It’s not that PMs fail to appreciate the value that SREs bring to the table. Today, in the era of software as a service, features such as security, reliability and data privacy are respected as critical features of the service-product a SaaS company delivers. Modern application users and customers of software services care a lot about data privacy, cybersecurity and uptime; therefore, PMs care, too. In fact, it’s not uncommon to see these features touted prominently on a company’s website, because the folks in marketing know that customers are making purchasing decisions based on whether the company can deliver reliability, speed, security and performance quality. So, yes, PMs do care.
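
One concrete number that anchors the SRE/PM conversation is the error budget implied by a candidate SLO. A quick back-of-the-envelope calculation, with an assumed 99.9% availability target over a 30-day window:

```python
# Error budget implied by an SLO; the 99.9% target and 30-day window are examples.
slo = 0.999                         # agreed availability target
window_days = 30
total_minutes = window_days * 24 * 60

error_budget_minutes = (1 - slo) * total_minutes
print(f"A {slo:.1%} SLO over {window_days} days allows "
      f"about {error_budget_minutes:.0f} minutes of downtime.")   # ~43 minutes
```

Framing the target this way – minutes of acceptable downtime per month rather than a string of nines – tends to be an easier basis for negotiation between SREs and business stakeholders.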


How to improve the developer experience

Developers come into a software project motivated, but it doesn't take long for that energy to get sapped. "[Onboarding] is where I feel most developers lose their initial spurt of motivation," said Chris Hill, senior manager of software development at T-Mobile. An inherited software project comes with immediate barriers to productivity, such as lacking or obscure documentation and the time a developer wastes waiting for access to the code repository and dev environment. Once work begins, the developer must grasp what the code means, how it delivers value and all the tools that are part of the dev cycle. "Every [inherited project] feels like I stepped in the middle of an IKEA build cycle, and all the parts are missing, and there are no instructions, and there's no support line, and all the screws are stripped, and I have pressure that I should come out with my first feature next week," Hill said. At T-Mobile, Hill prioritizes developer experience, which is comparable to user experience but specific to developers' work. A positive developer experience is one in which programmers can easily access the tools or resources they need and apply their expertise without unnecessary constraints.



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - October 30, 2020

The future of IoT: 5 major predictions for 2021

Certainly COVID-19 continues to plague the globe, and the research predicts that connected device makers will double their efforts in healthcare. But COVID-19 forced many of those who were ill to stay at home or delay necessary care. This has left chronic conditions unmanaged, cancers undetected, and preventable conditions unnoticed. "The financial implications of this loom large for consumers, health insurers, healthcare providers, and employers," Forrester's report stated. There will be a surge in interactive and proactive engagement such as wearables and sensors, which can monitor a patient's health while they are at home. Post-COVID-19 healthcare will be dominated by digital-health experiences and will improve the effectiveness of virtual care. The convenience of at-home monitoring will spur consumers' appreciation of and interest in digital health devices as they gain greater insight into their health. Digital health device prices will become more consumer-friendly. The Digital Health Center of Excellence, established by the FDA, is foundational for the advancement and acceptance of digital health. A connected health-device strategy devised by healthcare insurers will tap into data to improve understanding of patient health, personalization, and healthcare outcomes.


A Bazar start: How one hospital thwarted a Ryuk ransomware outbreak

We’ve been following all the recent reporting and tweets about hospitals being attacked by Ryuk ransomware. But Ryuk isn’t new to us… we’ve been tracking it for years. More important than just looking at Ryuk ransomware itself, though, is looking at the operators behind it and their tactics, techniques, and procedures (TTPs)—especially those used before they encrypt any data. The operators of Ryuk ransomware are known by different names in the community, including “WIZARD SPIDER,” “UNC1878,” and “Team9.” The malware they use has included TrickBot, Anchor, Bazar, Ryuk, and others. Many in the community have shared reporting about these operators and malware families (check out the end of this blog post for links to some excellent reporting from other teams), so we wanted to focus narrowly on what we’ve observed: BazarLoader/BazarBackdoor (which we’re collectively calling Bazar) used for initial access, followed by deployment of Cobalt Strike, and hours or days later, the potential deployment of Ryuk ransomware. We have certainly seen TrickBot lead to Ryuk ransomware in the past. This month, however, we’ve observed Bazar as a common initial access method, leading to our assessment that Bazar is a greater threat at this time for the eventual deployment of Ryuk.


Getting started with DevOps automation

We often think of the term “DevOps” as being synonymous with “CI/CD”. At GitHub we recognize that DevOps includes so much more, from enabling contributors to build and run code (or deploy configurations) to improving developer productivity. In turn, this shortens the time it takes to build and deliver applications, helping teams add value and learn faster. While CI/CD and DevOps aren’t precisely the same, CI/CD is still a core component of DevOps automation. Continuous integration (CI) is a process that implements testing on every change, enabling users to see if their changes break anything in the environment. Continuous delivery (CD) is the practice of building software in a way that allows you to deploy any successful release candidate to production at any time. Continuous deployment (CD) takes continuous delivery a step further. With continuous deployment, every successful change is automatically deployed to production. Since some industries and technologies can’t immediately release new changes to customers (think hardware and manufacturing), adopting continuous deployment depends on your organization and product. Together, continuous integration and continuous delivery (commonly referred to as CI/CD) create a collaborative process for people to work on projects through shared ownership.


Challenges in operationalizing a machine learning system

Once data is gathered and explored, it is time to perform feature engineering and modeling. While some methods require strong domain knowledge to make sensible feature engineering decisions, others can learn significantly from the data. Models such as logistic regression, random forest, or deep learning techniques are then run to train the algorithms. There are multiple steps involved here, and keeping track of experiment versions is essential for governance and reproducibility of previous experiments. Hence, having both the tooling and the IDE for managing experiments with Jupyter notebooks, scripts, and other artifacts is essential. Such tools require provisioning of hardware and proper frameworks to allow data scientists to perform their jobs optimally. After the model is trained and performing well, in order to leverage the output of this machine learning initiative, it is essential to deploy the model into a product, whether that is on the cloud or directly "on the edge". ... If you have a large set of inputs you would like predictions for, without any immediate latency requirements, you can run batch inference on a regular cycle or on a trigger.
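As a rough, hedged illustration of that batch-inference point, here is a minimal self-contained Python sketch using scikit-learn; the synthetic data, random-forest model, and output file name are our own assumptions for demonstration, not details from the article:

```python
# Minimal batch-inference sketch with synthetic data (illustrative assumptions only)
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for a model produced during experimentation and training
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Batch inference: score a large set of inputs on a schedule or trigger,
# with no per-request latency requirement
X_batch = rng.normal(size=(10_000, 4))
scores = model.predict_proba(X_batch)[:, 1]
predictions = model.predict(X_batch)

# Persist results for downstream consumers (e.g., a nightly report or warehouse load)
np.savetxt("batch_predictions.csv", np.column_stack([scores, predictions]),
           delimiter=",", header="score,prediction", comments="")
```

In a production setting the model would typically be loaded from a model registry and the outputs written to a data store rather than a local CSV, but the scheduled score-and-persist loop is the same.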


The CFO's guide to data management

"New technologies using machine learning, natural language processing, and advanced analytics can help finance leaders fix or work around many data problems without the need for large-scale investment and company-wide upheaval,'' Deloitte said. In fact, such technologies are already being used to help improve corporate-level forecasting, automate reconciliations, streamline reporting, and generate customer and financial insights, according to the firm. Why are CFOs getting involved in data management? "Business decisions based on insights derived from data are now critical to organizational performance and are becoming an essential part of a company's DNA," explained Victor Bocking, managing director, Deloitte Consulting LLP, in a statement. "CFOs and other C-level executives are getting more directly involved, partnering with their CIOs and CDOs [chief data officer] in leading the data initiatives for the parts of the business they are responsible for." As companies generate more and more data each day, finance teams have seemingly limitless opportunities to glean new insights and boost their value to the business. But doing that is easier said than done, the firm noted. The problem is the amount of data emanating daily from various sources can be overwhelming. Deloitte's Finance 2025 series calls this "the data tsunami." 


Can automated penetration testing replace humans?

To answer this question, we need to understand how they work, and crucially, what they can’t do. While I’ve spent a great deal of the past year testing these tools and comparing them in like-for-like tests against a human pentester, the big caveat here is that these automation tools are improving at a phenomenal rate, so depending on when you read this, it may already be out of date. First of all, the “delivery” of the pen test is done by either an agent or a VM, which effectively simulates the pentester’s laptop and/or attack proxy plugging into your network. So far, so normal. The pentesting bot will then perform reconnaissance on its environment by running the same scans a human would: often a vulnerability scan with the tester’s tool of choice, or simply a ports-and-services sweep with Nmap or Masscan. Once they’ve established where they sit within the environment, they will filter through what they’ve found, and this is where their similarities to vulnerability scanners end. Vulnerability scanners will simply list a series of vulnerabilities and potential vulnerabilities that have been found, with no context as to their exploitability, and will simply regurgitate CVE references and CVSS scores.
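To make the reconnaissance step concrete, here is a toy, hedged Python sketch of a ports-and-services sweep; purpose-built tools such as Nmap and Masscan are far faster and far more thorough, and the host and port list below are illustrative assumptions only:

```python
# Toy ports-and-services sweep (a stand-in for an Nmap/Masscan-style scan)
import socket

COMMON_PORTS = [22, 80, 139, 443, 445, 3389, 8080]

def sweep(host: str, ports=COMMON_PORTS, timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Only scan hosts you are authorized to test
    print(sweep("127.0.0.1"))
```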


'Credible threat': How to protect networks from ransomware

Ransomware attacks are becoming more rampant now that criminals have learned they are an effective way to make money in a short amount of time. Attackers do not even need any programming skills to launch an attack because they can obtain code that is shared among the many hacker communities. There are even services that will collect the ransom via Bitcoin on behalf of the attackers and just require them to pay a commission. This all makes it more difficult for the authorities to identify an attacker. Many small and medium-size businesses pay ransoms because they do not back up their data and do not have any other options available to recover their data. They sometimes face the decision of either paying the ransom or being forced out of business ... To avoid becoming a ransomware victim, organizations need to protect their network now and prioritize resources. These attacks will only continue to grow, and no organization wants to be portrayed in the media as having been forced to pay a ransom. If you are forced to pay, customers can lose trust in your organization’s ability to secure their personal data, and the company can see decreases in revenue and profit.


4 Types Of Exploits Used In Penetration Testing

Stack Based Exploits - This is possibly the most common sort of exploit for remotely hijacking the code execution of a process. Stack-based buffer overflow exploits are triggered when data written to a buffer on the stack exceeds the space allocated for it, spilling into adjacent memory. The stack is a region of process memory, organized as a LIFO (last in, first out) data structure, that holds local variables and function return addresses. An attacker can try to place malicious code on the stack and redirect the program’s flow to execute it, typically by overwriting the return pointer so that control passes to the malicious code. Integer Bug Exploits - Integer bugs occur because programmers do not foresee the semantics of C arithmetic operations, and they are often found and exploited by threat actors. The difference between integer bugs and other exploitation types is that they are usually exploited indirectly, yet their security costs are profoundly critical. Because integer bugs are triggered indirectly, they allow an attacker to corrupt other parts of memory and gain control over an application. Even if you resolve malloc errors, buffer overflows, or format string bugs, many integer vulnerabilities would still remain exploitable.
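As a hedged illustration of how an integer bug can indirectly corrupt memory, the following toy Python snippet simulates the unsigned 32-bit arithmetic a C allocation-size calculation would perform; the variable names and values are our own assumptions, not from the article:

```python
# Toy simulation of a 32-bit integer overflow in an allocation-size calculation
def alloc_size_32bit(count: int, element_size: int) -> int:
    # Mimic unsigned 32-bit multiplication as it would behave in C
    return (count * element_size) & 0xFFFFFFFF

count = 0x4000_0001      # attacker-controlled element count
element_size = 4         # bytes per element
size = alloc_size_32bit(count, element_size)

print(hex(size))         # 0x4 -- the true requirement (roughly 4 GiB) wrapped around
# A C program that allocated `size` bytes and then copied `count` elements
# would write far past the end of the buffer, corrupting adjacent memory.
```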


AI-Enabled DevOps: Reimagining Enterprise Application Development

AI and ML play a key role in accelerating digital transformation across use cases – from data gathering and management to analysis and insight generation. Enterprises that have adopted AI and ML effectively are better positioned to enhance productivity and improve the customer experience by swiftly responding to changing business needs. DevOps teams can leverage AI for seamless collaboration, incident management, and release delivery. They can also quickly iterate and personalize application features via hypothesis-driven testing. For instance, Tesla recently enhanced its cars’ performance through over-the-air updates without having to recall a single vehicle. Similarly, periodic performance updates to biomedical devices can help extend their shelf life and improve patient care significantly. These are just a few examples of how AI-enabled DevOps can foster innovation to drive powerful outcomes across industries. DevOps teams can innovate using the next-gen, cost-effective AI and ML capabilities offered by major cloud providers like AWS, Microsoft Azure, and Google Cloud. These providers offer access to virtual machines with all the required dependencies to help data scientists build and train models on high-powered GPUs for demand and load forecasting, text/audio/video analysis, fraud prevention, and more.


What the IoT Cybersecurity Improvement Act of 2020 means for the future of connected devices

With a constant focus on innovation in the IoT industry, security is often overlooked in the rush to get a product onto shelves. By the time devices are ready to be purchased, important details such as vulnerabilities may not have been disclosed throughout the supply chain, which could leave sensitive data exposed to exploitation. To date, many companies have been hesitant to publish these weak spots in their device security, preferring to keep them under wraps and their competition and hackers at bay. Now, however, the bill mandates that contractors and subcontractors involved in developing and selling IoT products to the government have a program in place to report vulnerabilities and their subsequent resolutions. This is key to increasing end-user transparency on devices and will better inform the government of risks found in the supply chain, so it can update the guidelines in the bill as needed. For the future of securing connected devices, multiple stakeholders throughout the supply chain need to be held accountable for better visibility and security to guarantee adequate protection for end-users.



Quote for the day:

"The great leaders have always stage-managed their effects." -- Charles de Gaulle