Daily Tech Digest - September 30, 2020

Zerologon Attacks Against Microsoft DCs Snowball in a Week

“This flaw allows attackers to impersonate any computer, including the domain controller itself, and gain access to domain admin credentials,” added Cisco Talos, in a writeup on Monday. “The vulnerability stems from a flaw in a cryptographic authentication scheme used by the Netlogon Remote Protocol which — among other things — can be used to update computer passwords by forging an authentication token for specific Netlogon functionality.” ... Microsoft’s patch process for Zerologon is a phased, two-part rollout. The initial patch for the vulnerability was issued as part of the computing giant’s August 11 Patch Tuesday security updates, which address the security issue in Active Directory domains and trusts, as well as on Windows devices. However, to fully mitigate the security issue for third-party devices, users will need not only to update their domain controllers but also to enable “enforcement mode.” They should also monitor event logs to find out which devices are making vulnerable connections and address non-compliant devices, according to Microsoft. “Starting February 2021, enforcement mode will be enabled on all Windows Domain Controllers and will block vulnerable connections from non-compliant devices,” it said.
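Microsoft's advice to monitor event logs maps to a handful of specific Netlogon event IDs (5827-5831) that patched domain controllers emit for vulnerable connections. The record format and helper below are an illustrative sketch over exported log data, not a real log-collection API:

```python
# Toy triage of exported domain controller event records for the
# Netlogon event IDs Microsoft documented alongside the Zerologon patch:
#   5827/5828 - vulnerable connection denied (machine / trust account)
#   5829      - vulnerable connection allowed (pre-enforcement only)
#   5830/5831 - vulnerable connection allowed by explicit policy
NETLOGON_EVENT_IDS = {5827, 5828, 5829, 5830, 5831}

def noncompliant_devices(events):
    """Return the device names still making vulnerable Netlogon
    connections, from a list of {'id', 'device'} records."""
    return {e["device"] for e in events if e["id"] in NETLOGON_EVENT_IDS}

sample = [
    {"id": 5829, "device": "NAS-01$"},      # vulnerable, allowed for now
    {"id": 4624, "device": "WS-07$"},       # ordinary logon, ignore
    {"id": 5827, "device": "PRINTER-3$"},   # vulnerable, denied
]
print(noncompliant_devices(sample))  # the NAS and the printer, not the workstation
```

Devices surfaced this way are the ones to patch or replace before enforcement mode begins blocking them.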

Programming languages: Java founder James Gosling reveals more on Java and Android

Object-oriented programming was also an important concept for Java, according to Gosling. "One of the things you get out of object-oriented programming is a strict methodology about what are the interfaces between things and being really clear about how parts relate to each other." This helps address situations when a developer tries to "sneak around the side" and breaks code for another user. He admits he upset some people by preventing developers from using backdoors. It was a "social engineering" thing, he says, but people discovered that the restriction made a difference when building large, complex pieces of software with lots of contributors across multiple organizations. It gave these teams clarity about how that stuff gets structured and "saves your life". He also offered a brief criticism of former Android boss Andy Rubin's handling of Java in the development of Android. Gosling had a brief stint at Google in 2011, following Oracle's acquisition of Sun. Oracle's lawsuit against Google over its use of Java APIs is still not fully settled after a decade of court hearings. "I'm happy that [Google] did it," Gosling said, referring to its use of Java in Android. "Java had been running on cell phones for quite a few years and it worked really, really well. ..."

Prepare Your Infrastructure and Organization for DevOps With Infrastructure-as-Code

To understand infrastructure as code better, let’s look at what happened when cars became ubiquitous here in the US. Before cars, the railroad system ruled it all. Trains running on extremely well-defined, regimented schedules carried passengers and goods, connecting people and places via the mesh of railroads that crisscrossed the country. Cars democratized transport, allowing us to use our own vehicles on schedules convenient to us. To support this, a rich ecosystem of gas stations, coffee shops, restaurants and rest areas cropped up everywhere. Most importantly, the investment in the US road system paved the way (pun intended) for a network of freeways, highways and city roads that now carry a staggering 4 trillion passenger-miles of traffic each year, compared to a meager 37 billion passenger-miles carried by railroads. We are in the midst of a similar revolution in application architectures. Applications are evolving from the railroad mode (monolithic architectures deployed and managed in centralized, regimented ways, following a waterfall model of project management) to the road system mode (microservices architectures with highly interconnected components, deployed and managed by small teams following DevOps practices).

The lifecycle of a eureka moment in cybersecurity

The cybersecurity industry is saturated with features passing themselves off as platforms. While the accumulated value of a solution’s features may be high, its core value must resonate with customers above all else. More pitches than I wish to count have left me scratching my head over a proposed solution’s ultimate purpose. Product pitches must lead with and focus on the solution’s core value proposition, and this proposition must be able to hold its own and sell itself. Consider a browser security plugin with extensive features that include XSS mitigation, malicious website blocking, employee activity logging and download inspection. This product may be built on many nice-to-have features, but, without a strong core feature, it doesn’t add up to a strong product that customers will be willing to buy. Add-on features, should they need to be discussed, ought to be mentioned as secondary or additional points of value. Solutions must be scalable in order to reach as many customers as possible without price hikes or reduced margins. Moreover, it’s critical to factor in the maintenance cost and “tech debt” of solutions that are environment-dependent on account of integrations with other tools or difficult deployments.

Why data security has never been more important for healthcare organisations

The first step is to adopt a ‘zero-trust approach’, meaning that every single access request by a user should require their identity to be appropriately verified. Of course, to avoid users having to enter their username/password over and over again, this approach should be risk-weighted so that less important access requires less interventionist verification, for instance, using contextual signals like the location of the user or device characteristics. There is no longer a trade-off to be made between security and convenience – access to data and systems can be easy, simple and safe. This approach allows an organisation to always answer yes to: “Am I appropriately sure this person is who they say they are?” It is a philosophy which should be applied to internal and external users alike: a crucial point given healthcare data’s risk profile. The second step for healthcare organisations is to consider eliminating the standard username/password authentication method and embracing modern, intelligent authentication. This delivers a combination of real-time context-based authentication and authorisation that seamlessly provides the appropriate level of friction based on the actions being taken by a service user.
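The risk-weighted approach described above can be sketched as a toy scoring function. The signal names, weights, and thresholds here are invented for illustration and do not reflect any vendor's API; real products weigh far richer context:

```python
# Illustrative sketch of risk-weighted step-up authentication:
# familiar context passes silently, higher risk adds friction.
def risk_score(signals):
    score = 0
    if signals.get("new_device"):         score += 2
    if signals.get("unusual_location"):   score += 2
    if signals.get("sensitive_resource"): score += 3
    return score

def required_friction(signals):
    score = risk_score(signals)
    if score == 0:
        return "allow"         # all contextual signals look familiar
    elif score <= 3:
        return "mfa_prompt"    # ask for one extra factor
    return "deny_and_review"   # escalate to manual verification

print(required_friction({}))                                   # allow
print(required_friction({"new_device": True}))                 # mfa_prompt
print(required_friction({"new_device": True,
                         "unusual_location": True,
                         "sensitive_resource": True}))         # deny_and_review
```

The point is structural: the same identity check produces different amounts of user-visible friction depending on context, which is how the security/convenience trade-off is softened.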

Do You Need a Chief Data Scientist?

The specific role that a Chief Data Scientist plays depends on how the organization is applying data science, and where it falls on the build-versus-buy spectrum. Here, it’s important to differentiate between an organization that is creating a for-sale product or service with machine learning as a core feature, and one that is looking to use machine learning or data science capabilities for a product or service that’s used internally. Anodot, which creates and sells software that uses machine learning models to analyze time-series data, is a good example of an organization building an external product with machine learning as a core feature. Cohen leads a team of data scientists in building all of the machine learning capabilities that are available in the Anodot product. On the other hand, there are organizations that are using machine learning capabilities to create a product that is used internally, or for data science services. In these types of organizations, the Chief Data Scientist, with her deep experience, is best equipped to answer these tough questions, Cohen says. “I think companies should build it themselves if they’re going to sell it, or if it’s a mission critical application,” Cohen says. “But it has to be mission critical. Otherwise, why bother?”

Should you upgrade tape drives to the latest standard?

There are three reasons that could justify upgrading your tape drive. The first would be if you have a task that uses large amounts of tape on a regular basis and upgrading to a faster tape drive would increase the speed of that process. For example, it might make sense for a movie producer using cameras that produce petabytes of data a day who wants to create multiple copies and send them to several post-production companies. Copying 1PB to tape takes 22 hours at LTO-7 speeds, and LTO-9 would roughly halve that time. (The three companies behind the standard have not advertised the speed part of the spec yet, but it should be somewhere around 1200-1400 MB/s.) If the difference between 22 and 11 hours changes your business, then by all means upgrade to LTO-9. Second, LTO-9 offers a 50% capacity increase over LTO-8 and a 200% capacity increase over LTO-7. If you are currently paying by the tape to ship your tapes or store them in a vault, a financial argument could be made for upgrading to LTO-9 and copying all of your existing tapes to newer, bigger tapes. You might be able to significantly reduce those monthly costs if you’re using LTO-8 tapes, and reduce them even more if you’re using LTO-7.
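A quick sanity check on the arithmetic: a single LTO-7 drive at its roughly 300 MB/s native speed would need over 900 hours for a petabyte, so the 22-hour figure presumably assumes a pool of drives writing in parallel (and/or compression). A small helper makes the scaling explicit; the drive count below is chosen to reproduce the article's number, not taken from it:

```python
def copy_hours(total_bytes, mb_per_sec_per_drive, drives=1):
    """Hours to stream total_bytes across `drives` tape drives in parallel."""
    return total_bytes / (mb_per_sec_per_drive * 1e6 * drives) / 3600

PB = 1e15
# A single LTO-7 drive at ~300 MB/s native:
print(round(copy_hours(PB, 300), 1))              # ~925.9 hours
# About 42 such drives in parallel matches the article's 22-hour figure:
print(round(copy_hours(PB, 300, drives=42), 1))   # ~22.0 hours
# Doubling per-drive throughput halves the time, whatever the drive count:
print(round(copy_hours(PB, 600, drives=42), 1))   # ~11.0 hours
```

Whatever the exact drive count, the key relationship holds: copy time scales inversely with per-drive speed, which is why a generational speed doubling halves the job.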

Archive as a service: What you need to know

Before the advent of cloud service providers, archive data was primarily stored on magnetic tape in environmentally clean and physically secure facilities, such as those still offered by companies like Iron Mountain. As time progressed, organizations also stored archived data on rotating hard drives, fiber optic storage and solid-state disks. Of great importance to IT managers is the cost of data storage, and the good news is that advances in storage technology -- especially as provided by cloud-based data archiving companies, as well as colocation-based archiving providers -- have helped reduce the cost of archival storage. ... Your organization should establish ground rules for its use of archive as a service: what gets stored, where storage occurs, how data is stored, the duration of storage and special data requirements such as deduplication and formatting. Perform the necessary due diligence to ensure that you can securely transmit your data to the archive location. Also, make sure the archiving provider can encrypt the data in transit and at rest, and ensure the storage location is fully secure and can minimize unauthorized access to archived data. You must carefully research key parameters -- data transmission media, data security capabilities, data integrity and data protection resources -- for all potential third-party vendors.

Three Steps To Manage Third-party Risk In Times Of Disruption

After a risk assessment has been carried out, organisations must ensure that a risk strategy is built into all service-level agreements and constantly monitor their third-party partners for new risks that may arise, including further down the supply chain. This includes monitoring each third party's performance metrics and internal control environment and collecting any relevant supporting documentation on an ongoing basis. In doing so, such information can inform risk strategy across the business and help companies identify issues before they arise. By monitoring these relationships on an ongoing basis, IT teams gain wider visibility into the risk landscape and can minimise the likelihood of issues down the line. ... If a company uses a large number of third parties, it can be hard for IT teams to keep track. Third-party relationships are often managed in silos across different areas of the business, each of which may have a unique way of identifying and managing them. This makes it increasingly difficult for management teams to get an accurate overview of third-party risk and performance across the business.

Java is changing in a responsible manner

The world around us is changing. You know, the first thing that got me excited about Java was applets. We did not even know that Java would thrive on the server side; that came much later. But today we are in a very different world. Back then, we did not have big data, we didn’t have smart devices, we didn’t have functions as a service, and we didn’t have microservices. If Java didn’t adapt to the new world, it would have gone extinct. I started with Java fairly early on, and it’s absolutely phenomenal and refreshing to know that I am now programming with the next generation of programmers. The desires and needs and expectations of the next generation are not the same as those of my generation. Java has to cater to the next generation of programmers. This is a perfect storm for the language: On one hand, Java is popular today. On the other hand, Java must stay relevant to the changing business environment, changing technology, and changing user base. And we are going to make this possible. After 25 years, Java is not the same Java. It’s a different Java, and that’s what excites me about it.

Quote for the day:

"Enthusiasm is the greatest asset in the world. It beats money, power and influence." -- Henry Chester

Daily Tech Digest - September 29, 2020

The rise of remote work can be unexpectedly liberating

Employees could become increasingly mercenary, no longer swayed by the strong social bonds and physical-world perks of the office of the past. For their part, employers could increasingly view their staffs as little more than interchangeable work units. As a manager, no matter how objective I think I may be, I would probably find it easier to fire an employee with whom I had little personal connection. That difficult conversation would be reduced to a few minutes on a screen, with no chance of running into the person later in the coffee room. All of this may sound dismal, but this change in employee psychology and loyalty may come with an unexpected liberation, encouraging workers to look beyond the workplace to build friendships and identity. In our previous office lives, some of us had access to free food, coffee rooms or other on-site perks. We might have enjoyed them, but they also helped keep us in the office for long hours. Likewise, the presence of co-workers and bosses made us more compliant, less likely to take a proper lunch hour or make the effort to attend a child’s school event. With our offices gone, our days have now opened up. Why not make that doctor’s appointment for 4 p.m.? Why not pick the kids up at day care rather than find a babysitter?

Hardware security: Emerging attacks and protection mechanisms

Every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says. She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware. “Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.” Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.

Still not dead: The mainframe hangs on, sustained by Linux and hybrid cloud

Others say technologies such as machine learning and artificial intelligence will also drive future mainframe development. “Data insights help drive actionable and profitable results — but the pool of data is growing at astronomical rates. That’s where AI can make a difference, especially when it’s on a mainframe. Consider the amount of data that resides on a mainframe for an organization in the banking, manufacturing, healthcare, or insurance sectors. You’d never be able to make sense of it all without AI,” said Deloitte’s Cobb. As an example, Cobb said core banking operations can do more than simply execute large volumes of transactions. “Banks need deep insights about customer needs, preferences, and intentions to compete effectively, along with speed and agility in sharing and acting on those insights. That’s easier said than done when data is constantly changing. Now if you can analyze data directly on the mainframe, you can get near real-time insights and action. That makes the mainframe an important participant in the AI/ML revolution,” Cobb said. The mainframe environment isn’t without challenges going forward.

How AI can transform finance departments to help Covid-19 recovery

The modern world has made company spending less centralised than ever, with employees spending money across more expense categories and payment methods than ever before. This growth in the volume of financial data leads to an increase in the risk of fraud and noncompliance. This is a risk few businesses can take, especially when cash flow needs to be conserved. A study by the Association of Certified Fraud Examiners (ACFE) found that the average organisation loses 5% of its annual revenue to internal fraud. During an economic downturn, this is simply unsustainable. Much of this is accidental, with employees often mistakenly duplicating expense claims or invoices. Businesses are only able to audit around 10% of expense reports manually, so much potential fraud goes undetected. AI provides a solution to this problem, enabling the auditing of every single spend report. It can learn patterns and detect any anomalies that appear in financial data. Covid-19 has made it more important than ever that businesses identify any fraudulent activity and prevent it. Invoice fraud is one example that has seen an increase during the pandemic.
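The accidental duplication mentioned above is the simplest case an automated audit catches. This minimal exact-match check is only an illustration of the idea; real systems layer anomaly models over many more features than employee, amount, and date:

```python
# Toy version of the kind of check an AI-assisted audit automates:
# flag expense claims that repeat an earlier claim's employee, amount,
# and date (a common form of accidental double-submission).
def find_duplicates(claims):
    seen, dupes = set(), []
    for c in claims:
        key = (c["employee"], c["amount"], c["date"])
        if key in seen:
            dupes.append(c)
        seen.add(key)
    return dupes

claims = [
    {"employee": "a.jones", "amount": 42.50, "date": "2020-09-01"},
    {"employee": "b.smith", "amount": 18.00, "date": "2020-09-02"},
    {"employee": "a.jones", "amount": 42.50, "date": "2020-09-01"},  # resubmitted
]
print(len(find_duplicates(claims)))  # 1
```

Because a check like this is cheap to run on every record, automation reviews 100% of reports where a manual process samples around 10%.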

Universal Health Services' IT Network Crippled

According to a post on Reddit by an individual who claims to work at a UHS facility in the Southeastern U.S., on Sunday at approximately 2 a.m., systems in the facility's emergency department "just began shutting down." The individual says: "I was sitting at my computer charting when all of this started. It was surreal and definitely seemed to propagate over the network. All machines in my department are Dell Win10 boxes." Anti-virus programs were disabled by the attack, and hard drives "just lit up with activity," the individual writes. "After one minute or so of this, the computers logged out and shutdown. When you try to power back on the computers they automatically just shut down. We have no access to anything computer based including old labs, EKGs, or radiology studies. We have no access to our PACS radiology system." Media outlet Bleeping Computer reports that a UHS insider says that during the incident, files were being renamed to include the .ryk extension, which is used by the Ryuk ransomware. Likewise, citing "people familiar with the incident," the Wall Street Journal reports that the attack did indeed involve ransomware.

The Shared Irresponsibility Model in the Cloud Is Putting You at Risk

The Shared Responsibility Model is pretty well understood now to mean: "If you configure, architect, or code it, you own the responsibility for doing that properly." While the relationship between the customer and the cloud is well understood, our experience working with software teams indicates that the organizational and architectural security responsibilities within organizations are not. And that is where the Shared Irresponsibility Model comes into play. When something goes wrong in the cloud — some form of security issue or incident — corporate management inevitably will come looking for the most senior person in the IT organization to blame. The IT organization and development teams might not have gone line by line through the various cloud providers' Shared Responsibility Models to entirely understand what is and isn't something they have to deal with. Developers are focused on developing and getting code running, typically with high rates of change. With the cloud, pushing code into production doesn't have many hurdles. The cloud provider is not responsible for an organization's own compliance, and, by default, it typically will not alert on misconfigurations that could introduce risk, either.

Identity theft explained: Why businesses make tempting targets

Identity theft is most often associated with the act of stealing an individual's identity. But as Mitt Romney once famously said, "corporations are people, my friend," and businesses have all the sorts of "personal" data — tax ID numbers and bank accounts, for instance — that individuals have, which can be stolen and abused. We're not talking about security breaches or employees misusing corporate assets here; we're talking about an identity thief pretending to be someone within a company who has the authority to make financial transactions, just like they might pretend to be another individual. In fact, a business may be an even more tempting target for an identity thief than an individual because businesses have high credit limits, substantial bank accounts, and make big payments to vendors on a regular basis. The consequences can be dire, particularly for small businesses where the founder's or owner's finances are deeply entangled with the company's. Before we move on, we should take note of a couple of ways that even the theft of individuals' identities can affect businesses. For instance, one of the most pernicious effects of identity theft is just how much time victims have to spend calling credit agencies and financial institutions to resolve the issue; a recent study found that victims can take up to 175 hours to set everything straight.

Using Nginx to Customize Control of Your Hosted App

Nginx is an open-source web server that is a world leader in load balancing and traffic proxying. It comes with a plethora of plugins and capabilities that can customize an application’s behavior using a lightweight and easy-to-understand package. According to Netcraft and W3Techs, Nginx serves approximately 31-36% of active websites, putting it neck and neck with Apache as the world’s preferred web server. This means that it is not only well-respected, trusted, performant enough for a large portion of production systems, and compatible with just about any architecture, but also backed by a loyal following of engineers and developers supporting the project. These are key factors in considering the longevity of your application, how portable it can be, and where it can be hosted. Let's look at a situation when you might need Nginx. In our example, you've created an app and deployed it on a Platform as a Service (PaaS)—in our case, Heroku. With PaaS, your life is easier, as decisions about infrastructure, monitoring, and supportability have already been made for you, guaranteeing a clean environment in which to run your applications with ease.
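As a concrete taste of the customization Nginx offers, here is a minimal reverse-proxy configuration. The domain name and upstream port are placeholders for your own deployment, and a real config would add TLS, timeouts, and caching:

```nginx
# Minimal sketch: Nginx fronting an application process.
upstream app_backend {
    server 127.0.0.1:5000;   # your application server (placeholder port)
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_backend;
        # Preserve the original host and client address for the app:
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The `upstream` block is where load balancing enters: listing several `server` lines there makes Nginx distribute requests across them.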

The future of retail isn’t what it used to be

Appointment-based shopping is one key area of immediate opportunity. Initially seen in luxury and higher-end stores, appointment-based shopping balances safety, capacity, and personalized service. It can also serve two needs at once. For example, Best Buy uses appointments for more guided shopping with an advisor. For clothing retailers, appointment-based shopping can help customers schedule dressing room visits with the specific items they want to try. With the right digital capabilities, consumers can shop online, select items in various sizes, and schedule a time and room to visit a retailer for a personalized trial and fitting. Making the in-store shopping experience better should include planograms and the ability to look up assortments and stock in a store. Assortment differences from store to store mean that shoppers may go into a store looking for a product that a particular location does not stock. Home Depot and Target both do well in indicating if a product is in stock and where it’s located within the store. Contactless shopping is another area worth further focus. Self-checkout in retail has been available and increasing its footprint for some time.

Microsoft: Some ransomware attacks take less than 45 minutes

Per Microsoft, the most targeted accounts in BEC scams were those of C-suites and accounting and payroll employees. But Microsoft also says that phishing isn't the only way into these accounts. Hackers are also starting to adopt password reuse and password spray attacks against legacy email protocols such as IMAP and SMTP. These attacks have been particularly popular in recent months, as they allow attackers to bypass multi-factor authentication (MFA) solutions, since logging in via IMAP and SMTP doesn't support this feature. Furthermore, Microsoft says it's also seeing cybercrime groups increasingly abusing public cloud-based services to store artifacts used in their attacks, rather than using their own servers. Groups are also changing domains and servers much faster nowadays, primarily to avoid detection and remain under the radar. But, by far, the most disruptive cybercrime threat of the past year has been ransomware gangs. Microsoft said that ransomware infections had been the most common reason behind the company's incident response (IR) engagements from October 2019 through July 2020.
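Password-spray activity has a recognizable shape in failed-logon telemetry: one source trying a few passwords against many distinct accounts, unlike a brute force hammering a single account. The heuristic below is a toy illustration of that defensive idea, not Microsoft's detection logic, and the threshold is arbitrary:

```python
# Toy password-spray detector over (source_ip, username) failed-logon pairs:
# flag sources that attempted logons against many *distinct* accounts.
from collections import defaultdict

def spray_suspects(failed_logins, min_accounts=3):
    accounts_per_ip = defaultdict(set)
    for ip, user in failed_logins:
        accounts_per_ip[ip].add(user)
    return {ip for ip, users in accounts_per_ip.items()
            if len(users) >= min_accounts}

attempts = [
    ("203.0.113.9", "alice"), ("203.0.113.9", "bob"),
    ("203.0.113.9", "carol"),                     # three accounts: spray-like
    ("198.51.100.4", "dave"), ("198.51.100.4", "dave"),  # one account: not a spray
]
print(spray_suspects(attempts))  # {'203.0.113.9'}
```

This is also why legacy IMAP/SMTP endpoints matter: they accept the attempts silently and skip MFA, so this kind of log analysis may be the only place the attack is visible.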

Quote for the day:

"Leadership is unlocking people's potential to become better." -- Bill Bradley

Daily Tech Digest - September 28, 2020

5 ways agile devops teams can support IT service desks

Devops teams should specifically tailor planning, release, and deployment communications or collaborations to their audiences. For service desk and customer support teams, communications should focus on how the release impacts end-users. Devops teams should also anticipate the impact of changes on end-users and educate support teams. When an application’s user experience or workflow changes significantly, bringing in support teams early to review, understand, and experience the changes themselves can help them update support processes. ... Let’s consider two scenarios. One devops team monitors their multicloud environments and knows when servers, storage, networks, and containers experience issues. They’ve centralized application logs but have not configured reports or alerts from them, nor have they set up any application monitors. More often than not, when an incident or issue impacts end-users, it’s the service desk and support teams who escalate the issue to IT ops, SREs (site reliability engineers), or the devops team. That’s not a good situation, but neither is the other extreme, when IT operational teams configure too many system and application alerts.

Safeguarding Schools Against RDP-Based Ransomware

Most school districts now acknowledge that things will not be back to normal this fall, and they are planning hybrid learning solutions for the school year. Hackers are delighted with this development since distance learning is often implemented using Microsoft's Remote Desktop Protocol (RDP), one of the prime targets for cybercriminals, aiming for quick gains. Their primary tactic: install ransomware that locks up data until ransoms are paid. Recently, in June 2020, the University of California San Francisco School of Medicine paid a ransom of over $1 million to regain access to important scientific data. While a K-12 school or school district may not have data worth millions, cybercriminals know that schools often lack the resources large corporations deploy to guard against cyberattacks, which makes them prime targets. One specific attack vector the FBI has warned about is Ryuk ransomware, which is deployed via RDP endpoints, specifically students, parents, and teachers in the K-12 environment. Ryuk uses a sophisticated type of data encryption that targets backup files. Once the end user has been infected, that person can propagate the virus to the school's servers, where it can cause havoc.

Arm swimming in a sea of uncertainty that could sink its business model

"The risk with Arm going forward is Arm works because I can source Arm IP, and I know that Arm will not compete with me. Some of Arm's other customers might compete with me, but my supplier will not compete with me because they do not sell chips," he said. "We're moving to a scenario now where there's a potential that if I'm sourcing IP from a company that will compete with me for product -- the selling of chips -- that's obviously going to cause concern for quite a few companies that may also raise antitrust or anti-competitive issues in terms of closing the deal as well." And this is before the situation with Arm China enters the equation. Arm China is a joint venture -- the style of arrangement many western companies enter into to do business in the Middle Kingdom -- and in July, Arm sought to fire the CEO of that venture, Allen Wu, for running another company on the side that invested in Chinese Arm customers. That would normally be a pretty straightforward case of conflict of interest, except Wu holds Arm China's registration documents and company seal and has not given them up, Bloomberg reported in July. Arm China also posted a public letter, signed by 176 of its employees, imploring Beijing to protect it from its UK parent company.

Why You Should Stop Saving Photos From iMessage, WhatsApp And Android Messages

In Check Point’s proof-of-concept attack, an image is messaged to a victim over a popular platform — iMessage, Android Messages or WhatsApp — and the content of the image tempts the victim to save the photo to their device. It’s easily done — most of us do it all the time, even if just to share the image on a different platform rather than forward the message we have received. Check Point’s Ekram Ahmed told me that this should serve as a warning. “Think twice before you save photos onto your device,” he told me, “as they can be a Trojan horse for hackers to invade your phone. We demonstrated this with Instagram, but the vulnerability can likely be found in other applications.” That’s almost certainly the case — the issue was with the deployment of an open-source image parsing library buried within the Instagram app, and that third-party software library is widely installed in countless other apps. ... The issue comes when you save the image to the album on your phone’s internal storage or an external disk. We saw this last year, with WhatsApp and Telegram exposed to an Android vulnerability where images were saved to an external disk. That said, earlier this year, Google’s Project Zero team warned that image handling by the messengers themselves on iOS could be defeated when an unusual file type was handled.

Why Data Intensity Matters in Today’s World

Data intensity won’t happen overnight. It’s a journey that brings together the right technology, best practices, and infrastructure foundation. The first step is to start with proven, available technologies. Open source offerings may tempt us with the latest technical bells and whistles, but they aren’t always the solution that aligns best with our business objectives. One reason that IT projects fail so often is that people choose the wrong technology. As you evaluate the tooling you will use with your data, consider whether you need the scale and complexity that comes with these technologies. Not every company is a Facebook or a Google. Choose the technology that lines up best with your own use case and your platform, not merely the flavor of the month. Don’t be afraid to purchase the technology and tools you need, rather than building them yourself. Maximizing data literacy is another key step toward data intensity. It starts with establishing a common way to talk about data, using a baseline set of knowledge, such as SQL. Understanding the data is more important than understanding the technology behind it. Even the best solution won’t do you any good if you can’t bring it into production.
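As an example of that shared SQL baseline, the question "what did we spend by category?" reads the same to anyone data-literate, regardless of the engine behind it. The snippet uses Python's built-in sqlite3 with made-up data purely for illustration:

```python
# A baseline-SQL question any data-literate teammate can read:
# total spend by category, from a tiny in-memory table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (month TEXT, category TEXT, amount REAL)")
conn.executemany("INSERT INTO spend VALUES (?, ?, ?)", [
    ("2020-09", "travel", 120.0),
    ("2020-09", "travel", 80.0),
    ("2020-09", "software", 300.0),
])
rows = conn.execute(
    "SELECT category, SUM(amount) FROM spend "
    "GROUP BY category ORDER BY category"
).fetchall()
print(rows)  # [('software', 300.0), ('travel', 200.0)]
```

The technology under the query can change; the shared vocabulary of tables, filters, and aggregates is what the data-literacy step establishes.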

GCA releases new version of the GCA Cybersecurity Toolkit for SMBs

The GCA toolkit provides small businesses a way to address these risks with free tools and resources that they can implement themselves. For government and industry, the toolkit is a valuable resource that can be provided to help secure their supply chains and vendors. “Helping small businesses address cybersecurity challenges requires that we meet them where they are, with resources designed to match their resources and expertise. We worked with partners and stakeholders to develop the GCA Cybersecurity Toolkit for Small Business more than a year ago and since that time have evolved the toolkit to be even easier to use, either all at once or a step at a time,” said Philip Reitinger, GCA’s President and CEO. “This revision of the toolkit is a significant step forward on this front, and we are pleased to share it to further assist small businesses in reducing cyber risk.” Since its initial launch there have been more than 105,000 visits to the toolkit. Key to its success have been partnerships with organizations such as Mastercard, ICTswitzerland, and the Swiss Academy of Engineering Sciences (SATW); the latter two resulted in the German translation of the toolkit, an important contribution to the implementation of the National strategy for Switzerland’s protection against cyber risks (NCS).

7 low-code platforms developers should know

Low-code platforms are far more open and extensible today, and most have APIs and other ways to extend and integrate with the platform. They provide different capabilities around the software development lifecycle from planning applications through deployment and monitoring, and many also interface with automated testing and devops platforms. Low-code platforms have different hosting options, including proprietary managed clouds, public cloud hosting options, and data center deployments. Some low-code platforms are code generators, while others generate models. Some are more SaaS-like and do not expose their configurations. Low-code platforms also serve different development paradigms. Some target developers and enable rapid development, integration, and automation. Others target both software development professionals and citizen developers with tools to collaborate and rapidly develop applications.  I selected the seven platforms profiled here because many have been delivering low-code solutions for over a decade, growing their customer bases, adding capabilities, and offering expanded integration, hosting, and extensibility options. Many are featured in Forrester, Gartner, and other analyst reports on low-code platforms for developers and citizen development.

9 Tips to Prepare for the Future of Cloud & Network Security

Discussions of cloud security are often complicated because different people have different ideas of what constitutes cloud computing and what their personal roles and interests are, Riley said. It's incumbent on organizations to focus their attention on aspects of cloud security they can control: identity permissions, data configuration, and sometimes application code. Most cloud security issues that organizations face fall under these three areas. "The volume of cloud usage is increasing, the sophistication is increasing, the complexity is increasing, [and] the challenge is learning how to better utilize the public cloud," Riley said. A growing dependence on the cloud will also force businesses to rethink the way they approach network security, said Lawrence Orans, research vice president at Gartner, in a session on the subject. The future of network security is in the cloud, and security teams must keep up. The changes related to cloud adoption extend to the security operations center, which analysts anticipate will take a different form as more businesses depend on the cloud, adopt cloud security tools, and support fully remote teams. These shifts will demand a change in thinking for security operations teams.

How Centralized Log Management Can Save Your Company

Dropping all logs into a SIEM spikes costs, so oftentimes only a portion is collected, which creates fragmented or incomplete pictures and impacts security monitoring and incident response. CLMs lift the burden of having to hire staff and provide training and support for SIEMs. CLMs also reduce the costs organizations would incur with their SIEM providers, as well as the risk of endangering the SIEM infrastructure by storing unmanaged logs. Fragmented data collection can become unified data collection with a data highway: organizations can filter unruly data and deliver only what each team needs. This helps overcome the age-old strategy of letting separate teams maintain their own sources of data, which can instead be directed to the appropriate team via the data highway. The data highway lets you collect data once and use it many times, wherever it’s needed. ... One example of superfluous information is the timed mark that many applications add to their system log to show they are online. Unless a security auditor will need to see it, there is no reason an organization should pay to store it in their SIEM. Administrators can even filter out all extraneous text and add parsing for specific events.
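The heartbeat filtering described above is straightforward to sketch. A minimal Python illustration (the "MARK" pattern and the sample log lines are illustrative assumptions, not any specific CLM product's API):

```python
import re

# Heartbeat "mark" entries that many daemons emit to show they are alive;
# they add volume to the SIEM but rarely matter for security monitoring.
MARK_PATTERN = re.compile(r"--\s*MARK\s*--", re.IGNORECASE)

def filter_for_siem(log_lines):
    """Drop heartbeat marks; forward everything else to the SIEM."""
    return [line for line in log_lines if not MARK_PATTERN.search(line)]

logs = [
    "Jan 10 03:17:01 host sshd[812]: Failed password for root from 203.0.113.9",
    "Jan 10 03:20:00 host syslogd: -- MARK --",
    "Jan 10 03:25:14 host sudo: alice : TTY=pts/0 ; COMMAND=/bin/systemctl restart nginx",
]
print(filter_for_siem(logs))  # the two security-relevant lines survive
```

In a real data highway this filter would sit on the collection tier, with a separate full-fidelity copy retained in cheap storage for auditors who do need the marks.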

Applying Chaos Engineering in Healthcare: Getting Started with Sensitive Workloads

With critical systems, it can be a good idea to first run experiments in your dev/test environments to minimize both actual and perceived risk. As you learn new things from these early experiments, you can explain to stakeholders that production is a larger and more complex environment that would further benefit from this practice. Equally, before introducing something like this in production, you want to be confident that you have a safe approach, one that allows you to be surprised by new findings without introducing additional risk. As a next step, consider running chaos experiments in a new production environment before it is handling live traffic, by generating synthetic workloads. You get the benefit of starting to test some of the boundaries of the system in its production configuration, and it is easy for other stakeholders to understand how this will be applied and that it will not introduce added risk to customers, since live traffic isn’t being handled yet. To start introducing more realistic workloads than you can get from synthetic traffic, a next step may be to leverage your existing production traffic.

Quote for the day:

"Challenges in life always seek leaders and leaders seek challenges." -- Wayde Goodall

Daily Tech Digest - September 27, 2020

Programming Fairness in Algorithms

Machine learning fairness is a young subfield of machine learning that has been growing in popularity over the last few years in response to the rapid integration of machine learning into social realms. Computer scientists, unlike doctors, are not necessarily trained to consider the ethical implications of their actions. It is only relatively recently (one could argue since the advent of social media) that the designs or inventions of computer scientists were able to take on an ethical dimension. This is demonstrated in the fact that most computer science journals do not require ethical statements or considerations for submitted manuscripts. If you take an image database full of millions of images of real people, this can without a doubt have ethical implications. By virtue of physical distance and the size of the dataset, computer scientists are so far removed from the data subjects that the implications on any one individual may be perceived as negligible and thus disregarded. In contrast, if a sociologist or psychologist performs a test on a small group of individuals, an entire ethical review board is set up to review and approve the experiment to ensure it does not transgress across any ethical boundaries.

This Algorithm Doesn't Replace Doctors—It Makes Them Better

Operators of paint shops, warehouses, and call centers have reached the same conclusion. Rather than replace humans, they employ machines alongside people, to make them more efficient. The reasons stem not just from sentimentality but because many everyday tasks are too complex for existing technology to handle alone. With that in mind, the dermatology researchers tested three ways doctors could get help from an image analysis algorithm that outperformed humans at diagnosing skin lesions. They trained the system with thousands of images of seven types of skin lesion labeled by dermatologists, including malignant melanomas and benign moles. One design for putting that algorithm’s power into a doctor’s hands showed a list of diagnoses ranked by probability when the doctor examined a new image of a skin lesion. Another displayed only a probability that the lesion was malignant, closer to the vision of a system that might replace a doctor. A third retrieved previously diagnosed images that the algorithm judged to be similar, to provide the doctor some reference points.
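The first design, a ranked list of diagnoses, amounts to a softmax over the seven class scores followed by a top-k sort. A hedged Python sketch (the class labels and raw scores below are invented for illustration; the study used its own trained network):

```python
import math

CLASSES = ["melanoma", "nevus", "basal cell carcinoma", "actinic keratosis",
           "benign keratosis", "dermatofibroma", "vascular lesion"]

def softmax(scores):
    """Turn raw network scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def ranked_diagnoses(scores, k=3):
    """Return the top-k (label, probability) pairs, most probable first."""
    probs = softmax(scores)
    ranked = sorted(zip(CLASSES, probs), key=lambda lp: lp[1], reverse=True)
    return ranked[:k]

# Hypothetical raw network outputs (logits) for one lesion image:
logits = [2.1, 3.4, 0.2, -1.0, 1.5, -0.5, 0.0]
for label, p in ranked_diagnoses(logits):
    print(f"{label}: {p:.2f}")
```

The second design in the study would collapse this to a single malignancy probability, and the third would skip probabilities entirely in favor of nearest-neighbor retrieval of previously diagnosed images.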

Redefining Leadership In The Age Of Artificial Intelligence

Intelligent behaviour has long been considered a uniquely human attribute. But as computer science and IT networks evolved, artificial intelligence, and the people who championed it, came into the spotlight. AI in today’s world is both developing and under control. Without a transformation here, AI will never fully address the problems and dilemmas of business with data and algorithms alone. Wise leaders do not only create and capture vital economic value; they build a more sustainable and legitimate organisation. Leaders in AI sectors have eyes to see AI decisions and ears to hear employees’ perspectives. A futuristic AI leader plans to work not just for now but also for the years ahead. A company’s development in AI involves automating business processes using robotic technologies, gaining insight through data analysis and enhancement, making cost-effective predictions based on algorithms, and engaging with employees through natural language processing chatbots, intelligent agents and machine learning. Without a far-sighted leader, bringing all this to reality will be all but impossible.

Blockchain: En Route to the Global Supply Chain

In the context of a large-scale shipping operation, for instance, there may be thousands of containers filled with millions of packages or assets. A system that can track every asset with full certainty eliminates any concerns about whether the items are where they are supposed to be, or whether anything is missing. As blockchain expands, so too will the data it records, which in turn increases trust. By ensuring via this secured digital ledger that an asset has moved from a warehouse to a lorry on a Thursday afternoon, more data can then be added. For example, it can show that the asset moved from a specific shelf in a warehouse on a specific street and was moved by a specific truck operated by a specific driver. Securing the location data with full trust provides assurance that things are happening correctly and means that financial transactions can be made with more confidence. Layering mapping capabilities and rich location data onto a blockchain record also enables fraud detection. Without blockchain, there is no certainty that the delivery updates provided are in fact accurate. Blockchain makes transactions transparent and decentralised, making it possible to automatically verify their accuracy by matching the real location of an item with the location reported by a logistics company.
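The tamper-evidence described here comes from each record embedding a hash of the one before it. A toy Python sketch of such a ledger for asset-location updates (field names and events are illustrative, not any production blockchain):

```python
import hashlib
import json

def record_hash(record):
    """Deterministic SHA-256 over a canonical JSON encoding of the record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_event(chain, asset_id, location, handler):
    """Append a location update that embeds the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"asset": asset_id, "location": location,
              "handler": handler, "prev": prev}
    record["hash"] = record_hash({k: v for k, v in record.items() if k != "hash"})
    chain.append(record)

def verify(chain):
    """Re-derive every hash and link; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or rec["hash"] != record_hash(body):
            return False
        prev = rec["hash"]
    return True

ledger = []
append_event(ledger, "PKG-42", "warehouse shelf B3", "forklift-7")
append_event(ledger, "PKG-42", "lorry LDN-8821", "driver-19")
print(verify(ledger))          # True: chain is intact
ledger[0]["location"] = "???"  # tamper with history...
print(verify(ledger))          # False: verification fails
```

A real deployment adds distribution and consensus on top of this hash-linking, which is what removes the need to trust any single logistics company's reports.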

A closer look at Microsoft Azure Arc

At Ignite, Microsoft provided its answer on how Azure Arc brings cloud control on premises. The cornerstone of Azure Arc is Azure Resource Manager, the nerve center that is used for creating, updating, and deleting resources in your Azure account. That encompasses allocating compute and storage to specific workloads and then monitoring performance, policy compliance, updates and patches, security status, and so on. You can also fire up and access Azure Resource Manager through several paths ranging from the Azure Portal to APIs or command line interface (CLI). It provides a single pane of glass for indicating when specific servers are out of compliance; specific VMs are insecure; or certificates or specific patches are out of date – and it can then show recommended remedial actions for IT and development teams to take. While it requires at least some connection to the Azure Public Cloud, it can run offline when the network drops. Microsoft has built a lot of flexibility as to the environments that Azure Arc governs. It can be used for controlling bare metal environments as well as virtual machines running on any private or public cloud, SQL Server, or Kubernetes (K8s) clusters.

Why haven’t we ‘solved’ cybersecurity?

Cybersecurity-related incentives are misaligned and often perverse. If you had a real chance to become a millionaire or even a billionaire by ignoring security and a much smaller chance if you slowly baked in security, which path would you choose? We also fail to account for, and sometimes flat out ignore, the unintended consequences and harmful effects of the innovative technology and ideas we create. Who would have thought that a 2003 social media app, built in a dorm room, would later help topple governments and make the creator one of the richest people in the world? Cybersecurity companies and individual experts face the difficult challenge of balancing personal gain versus the greater good. If you develop a new offensive tool or discover a new vulnerability, should you keep it secret or make a name for yourself through disclosure? Concerns over liability and competitive advantage inhibit the sharing of best practices and threat information that could benefit the larger business ecosystem. Data has become the coin of the realm in the modern age. Data collection is central to many business models, from mature multi-national companies to new start-ups. Have a data blind spot?

Top Technologies To Achieve Security And Privacy Of Sensitive Data In AI Models

Differential privacy is a technique for sharing knowledge or analytics about a dataset by describing the patterns of groups within the dataset while withholding sensitive information about the individuals in it. The concept behind differential privacy is that if the effect of making an arbitrary single change in the database is small enough, the query result cannot be used to infer much about any single person, and hence provides privacy. Another way to explain differential privacy is that it is a constraint on the algorithms used to publish aggregate information about a statistical database, which limits the exposure of individual database entries. Fundamentally, differential privacy works by adding enough random noise to data that there are mathematical guarantees of individuals’ protection from re-identification. The results of data analysis are then essentially the same whether or not any particular individual is included in the data. Facebook has utilised the technique to protect sensitive data it made available to researchers analysing the effect of sharing misinformation on elections. Uber employs differential privacy to detect statistical trends in its user base without exposing personal information.
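The "random noise" step has a standard form: the Laplace mechanism adds noise whose scale is the query's sensitivity divided by the privacy budget ε. A minimal Python sketch for a counting query (the dataset and ε value are illustrative):

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon=0.5):
    """Counting query with sensitivity 1: adding or removing one person
    changes the true count by at most 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# How many people are 40 or older? True answer is 5; the released answer
# is noisy, so no single person's presence can be confidently inferred.
ages = [34, 29, 41, 57, 23, 45, 38, 61, 30, 52]
print(private_count(ages, lambda a: a >= 40))
```

Smaller ε means stronger privacy but noisier answers; choosing and accounting for ε across repeated queries is the hard part in practice.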

Getting Started with Mesh Shaders in Diligent Engine

Originally, hardware was only capable of performing a fixed set of operations on input vertices. An application could only set different transformation matrices (such as world, camera, projection, etc.) and instruct the hardware to transform input vertices with these matrices. This was very limiting in what an application could do with vertices, so to generalize the stage, vertex shaders were introduced. Vertex shaders were a huge improvement over the fixed-function vertex transform stage because developers were now free to implement any vertex-processing algorithm. There was, however, a big limitation: a vertex shader takes exactly one vertex as input and produces exactly one vertex as output. Implementing more complex algorithms that would require processing entire primitives, or generating them entirely on the GPU, was not possible. This is why geometry shaders were introduced as an optional stage after the vertex shader. A geometry shader takes a whole primitive as input and may output zero, one or more primitives.
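The contrast between the one-in/one-out vertex shader and the primitive-in/zero-or-more-out geometry shader can be modeled in a few lines of Python. This is a conceptual sketch of the pipeline stages, not GPU shader code; the "shaders" here are ordinary functions invented for illustration:

```python
def vertex_stage(vertices, vertex_shader):
    # A vertex shader maps exactly one input vertex to one output vertex.
    return [vertex_shader(v) for v in vertices]

def geometry_stage(primitives, geometry_shader):
    # A geometry shader sees a whole primitive and may emit zero or more.
    out = []
    for prim in primitives:
        out.extend(geometry_shader(prim))
    return out

# Hypothetical shaders: translate each vertex along x, then amplify each
# triangle into two (the original plus a copy shifted upward).
def translate(v):
    x, y, z = v
    return (x + 1.0, y, z)

def duplicate_up(tri):
    shifted = tuple((x, y + 2.0, z) for (x, y, z) in tri)
    return [tri, shifted]  # one primitive in, two primitives out

tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
verts = vertex_stage(list(tri), translate)
prims = geometry_stage([tuple(verts)], duplicate_up)
print(len(prims))  # 2
```

Note that `duplicate_up` could just as well return an empty list (culling the primitive), which is exactly what a one-in/one-out vertex shader cannot express.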

Need for data management frameworks opens channel opportunities

Today's huge influx of data is resulting in multiple inefficiencies, according to Mike Sprunger, senior manager of cloud and network security at Insight Enterprises, a global technology solution provider. He cited the example of an employee who generates a spreadsheet and shares it with half a dozen co-workers, who then send the spreadsheet to half a dozen others. The 1 MB file morphs into 36 MB, and when that information is backed up, data volumes double again. As cloud and flash technologies lowered storage pricing dramatically, many companies simply added more storage capacity as data demands grew. While companies stored more, they purged less. Furthermore, industry and government rules and guidelines for maintaining data have been evolving, so it can be unclear how to meet regulatory requirements, Sprunger noted, and decide what data can go and what must be kept. Compounding the challenge, communication between IT and business units is often mediocre or nonexistent, so neither group understands the business requirements or the technical possibilities of deleting outdated data, he added.
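Sprunger's duplication arithmetic compounds quickly. A quick sketch using the share counts from his example, with one backup copy assumed per stored file:

```python
def stored_size_mb(file_mb, first_wave, second_wave, backup_copies=1):
    """One file fanned out to first_wave people, each re-sharing it to
    second_wave more, then every stored copy backed up backup_copies times."""
    copies_mb = file_mb * first_wave * second_wave
    return copies_mb * (1 + backup_copies)

# 1 MB spreadsheet -> 6 co-workers -> 6 more each = 36 MB of copies,
# doubling to 72 MB once backups are taken.
print(stored_size_mb(1, 6, 6))
```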

RASP 101: Staying Safe With Runtime Application Self-Protection

Feiman says solutions like RASP and WAF have emerged from "desperation" to protect application data but are insufficient. The market needs a technology that is focused on detection rather than prevention. Indeed, in an effort to address the problems with RASP, he and his team at WhiteHat are in the process of beta testing an application security technology that performs app testing without instrumentation. As far as existing RASP technologies go, it's unlikely they'll stick around in their current form. Rather than an independent technology, Feiman believes RASP will ultimately get absorbed into application runtime platforms like the Amazon AWS and Microsoft Azure cloud platforms. This could happen through a combination of acquisitions and companies like AWS building their own lightweight RASP capabilities into their technologies. "The idea will stay, the market hardly will," says Feiman. On that, Sqreen's Aviat disagrees, saying RASP is "indeed a standalone technology." "I expect RASP to become a crucial element of any application security strategy, just like WAF or SCA is today – in fact, RASP is already referenced by NIST as critical to lowering your application security risk," he said.

Quote for the day:

"The leader has to be practical and a realist, yet must talk the language of the visionary and the idealist." -- Eric Hoffer

Daily Tech Digest - September 26, 2020

Steering Wealth Management Industry Through Digital Transformation In The Post Pandemic World

Implement ready-to-use digital solutions and change internal processes, instead of starting from scratch to build solutions that cater to existing processes. Don’t shy away from exploring global solutions; you will most likely get a great product that may not be expensive. Insist on a “pay as you use” or “pay as you grow” model instead of incurring significant implementation charges and license fees. Explore working with startups, who are hungry for business and will go out of their way to build great solutions. Build a robust database for sending relevant, targeted and personalized communications. Make a beginning and take baby steps: focus on 90% of your requirements, since a lot of time and energy is spent on addressing the remaining 10%, which can be handled manually or worked around. We are at the cusp of a brave, new world that demands self-sufficiency, and it is becoming rapidly clear that greater digital freedom will play a pivotal role in making the industry more effective, scalable and enduring on this uncharted road ahead. Firms that deploy these tools fast will attract clients and survive. The industry has always been one to shy away from digital transformation.

Layered security becomes critical as malware attacks rise

The scam script Trojan.Gnaeus made its debut at the top of WatchGuard’s top 10 malware list for Q2, making up nearly one in five malware detections. Gnaeus malware allows threat actors to hijack control of the victim’s browser with obfuscated code and forcefully redirect them away from their intended web destinations to domains under the attacker’s control. Another popup-style JavaScript attack, JS.PopUnder, was one of the most widespread malware variants last quarter. In this case, an obfuscated script scans a victim’s system properties and blocks debugging attempts as an anti-detection tactic. To combat these threats, organizations should prevent users from loading browser extensions from unknown sources, keep browsers up to date with the latest patches, use reputable ad blockers and maintain an updated anti-malware engine. XML-Trojan.Abracadabra is a new addition to the top 10 malware detections list, showing rapid growth in popularity since the technique emerged in April. Abracadabra is a malware variant delivered as an encrypted Excel file with the password “VelvetSweatshop”, the default password for Excel documents.

Want diversity? Move beyond your closed network

In earnest, the difficulty of recruiting diverse candidates reflects the fact that the networks the banking industry typically relies upon to attract and recruit talent do not reach diverse pools of talented candidates. This network gap is insidious too, leading to a lack of diversity in other aspects of business, like vendor procurement and investment. Once, Mitt Romney spoke of “binders full of women” when running for president. While his wording was inartful, he seemed to recognize that he needed to make a deliberate effort to build his network of talented women in order to be able to appoint numbers of qualified women. So, what deliberate steps can banks take to close the network gap and find talented people of color? Here are a few things any bank can do to turn intention into impact, and close the network gap. Begin with reflection: Why are you not tied to diverse networks? Do you know where to find black and brown civil society? Learning why your company may not be a cultural fit for certain demographics is nothing new for banks. Gender is probably the most recent example. Understanding that women bring different and needed experience to leadership creates an impetus for more diversity.

Why No One Understands Enterprise Architecture & Why Technology Abstractions Always Fail

The first step is demystification. All of the abstract terms – even the word “architecture” – should be modified or replaced with words and phrases that everyone – especially non-technology executives – can understand. Enterprise planning or Enterprise Business-Technology Strategy might be better, or even just Business-Technology Strategy (BTS). Why? Because “Enterprise Architecture” is nothing more than an alignment exercise, alignment between what the business wants to do and how the technologists will enable it now and several years out. It’s continuous because business requirements constantly change. At the end of the day, EA is both a converter and a bridge: a converter of strategy and a bridge to technology. The middle ground is the Business-Technology Strategy. EA – or should I say “Business-Technology Strategy” – isn’t strategy’s first cousin, it’s the offspring. EA only makes sense when it’s derived from a coherent business strategy. For technology companies – that is, companies that sell technology-based products and services – the role of EA is easier to define. Who doesn’t want to help technology (AKA “engineering”) – the ones who build the products and services – build the right applications with the right data on the right infrastructure?

Types of Apps that can be built with Angular Framework

Undoubtedly, Angular development has been almost everywhere since its release in 2009, and in recent years Angular development services have boomed. Angular is considered one of the best frameworks for developing web, single-page, and mobile applications. The Angular framework has impressive features that developers and enterprise website owners appreciate, and many developers have shifted to it. Before looking at why Angular is used for mobile app development and what sorts of applications can be built with it, let’s first cover what exactly the Angular framework is. Angular is a JavaScript-based framework from Google, built by Google’s developers to create dynamic web applications. Angular is a full-fledged framework used for the frontend development of an application. Angular has a lot to give to your web and mobile application: it will not only help you create an impressive UI but also deliver high performance and user-friendliness. As a feature-rich framework, Angular provides a vast number of capabilities for web application developers.

WebAssembly Could Be the Key for Cloud Native Extensibility

Google had been championing the idea of making WebAssembly a common runtime for Envoy, as a way to help its own Istio service mesh, of which Envoy is a major component. WASM is faster than JavaScript and, because it runs in a sandbox (a virtual machine), it is secure and portable. Perhaps best of all, because it is very difficult to write assembly-like WASM code, many parties created translators for other languages — allowing developers to use their favored languages such as C and C++, Python, Go, Rust, Java, and PHP. Google and the Envoy community also rallied around building a WebAssembly System Interface (WASI), which serves as the translation layer between the WASM and the Envoy filter chain. Still, the experience of building Envoy modules wasn’t packaged for developers, Levine thought at the time. There was still a lot of plumbing to add, settings for Istio and the like. “Google is really good at making infrastructure tooling. But I’d argue they’re not the best at making their user experience,” Levine said. And much like Docker customized the Linux LXC — pioneered in large part by Google — to open container technology to more developers, so too could the same be done with WASM/WASI for Envoy, Levine argues.

Amazon's robot drone flying inside our homes seems like a bad idea

Amazon says you can specify a flight path, map your house, locate points of interest, and generally instruct the eye of Skynet where to fly. Cyberdyne, uh, Amazon also says the device has built in obstacle avoidance. Let's think about that for a minute. Will the device be able to avoid hanging lamps or plants? What about objects high up on shelves? Will it be able to stand back when a sleep-addled adult gets up in the middle of the night to do middle of the night business? Why would it be out and about at that time anyway? And what about the downdraft? How close can it fly to bookshelves and knickknacks without air-blasting them to the ground? How much will it freak out your pets? My spouse? Your spouse? Just how creepy would it be for it to hover over the kids beds because you're too lazy to get off the couch to see if they're asleep? Every rational fiber of my being tells me this is wrong on every level. ... The Always Home Cam is primarily meant as a remote security cam. If you're out and you get an alert from a Ring doorbell or other security device (I wonder if this will work with other trigger devices), you can virtually fly around your house and see what's happening.

Project InnerEye open-source deep learning toolkit: Democratizing medical imaging AI

Project InnerEye has been working closely with the University of Cambridge and Cambridge University Hospitals NHS Foundation Trust to make progress on this problem through a deep research collaboration. Dr. Raj Jena, Group Leader in machine learning and radiomics in radiotherapy at the University of Cambridge, explains, “The strongest testament to the success of the technology comes in the level of engagement with InnerEye from my busy clinical colleagues. For over 15 years, the promise of automated segmentation of images for radiotherapy planning has remained unfulfilled. With the InnerEye ML model we have trained on our data, we now observe consistent segmentation performance to a standard that matches our stringent clinical requirements for accuracy.” The goal of Project InnerEye is to democratize AI for medical image analysis and empower developers at research institutes, hospitals, life science organizations, and healthcare providers to build their own medical imaging AI models using Microsoft Azure. So to make our research as accessible as possible, we are releasing the InnerEye Deep Learning Toolkit as open-source software.

How to Strengthen the Pillars of Data Analytics for Better Results

Data analysts and business analysts rely heavily on a fit-for-purpose data environment that enables them to do their jobs well. These environments allow them to answer questions from management and different parts of the business. These same professionals have expertise in working and communicating with data but often do not have deep technical knowledge of databases and the underlying infrastructure. For instance, they may be familiar with SQL and bringing together data sources in a simple data model that allows them to dig deeper in their analysis, but when the database performance degrades during more complex analysis, the depth of infrastructure reliance becomes clear. The dreaded spinner wheel or delays in analysis make it difficult to meet business needs and demands. This can impact critical decision making and reveal underlying weaknesses that get in the way of other data applications, such as artificial intelligence (AI). These indicators of poor performance also show the need for scaling the data environment to accommodate the growth of data and data sources.

The Role of Data Management in Advancing Biology

I think FAIR has really codified a way of thinking about data that's incredibly aspirational and resonates with people. One of the biggest challenges we're facing in this field right now is findability of the data—search is a hard problem. Then let's say you manage to find some data that you're very interested in; a lot of the time it's not clear whether or not those data are accessible to you or to the public. There's been a large push over the last decade to make everything reproducible, to make the data accessible, to have a data management plan. A lot of that effort isn't necessarily resourced, so just because you have a data management plan doesn't mean that you have a clear place where you can actually put data. We're lucky that the Sequence Read Archives exist and that the NIH continues to fund it, because that's become one of these major focal points for collecting the data. But even more than that, when you're in the middle of collecting data for a very specific question, you're not necessarily thinking about what other information to collect to make these data useful to other groups or other labs. That's not a part of the thought experiment that you're going through in that moment.

Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks

Daily Tech Digest - September 25, 2020

Polish police shut down hacker super-group 

According to reports in Polish media, the hackers have been under investigation since May 2019, when they sent their first bomb threat to a school in the town of Łęczyca. Investigators said that an individual named Lukasz K. found the hackers on internet forums and hired them to send a bomb threat to the local school, but to make the email look like it came from a rival business partner. The man whose identity was spoofed in the email was arrested and spent two days in prison before police figured out what had happened. ... Investigators said that when the hackers realized what was happening, they hacked a Polish mobile operator and generated invoices for thousands of zlotys (the Polish currency) in the names of both the detective and the framed businessman. ... Investigators said that the hackers would steal personal details from infected users, which they'd use to steal money from banks with weak security. Where banks had implemented multi-factor authentication mechanisms, the group would use the information stolen from infected victims to order fake IDs from the dark web, and then use the IDs to trick mobile operators into transferring the victim's account to a new SIM card.

All the Way from Information Theory to Log Loss in Machine Learning

In 1948, Claude Shannon introduced information theory in his 55-page paper “A Mathematical Theory of Communication”. Information theory is where we start the discussion that will lead us to log loss, a widely used cost function in machine learning and deep learning models. The goal of information theory is to efficiently deliver messages from a sender to a receiver. In the digital age, information is represented by bits, 0 and 1. According to Shannon, one bit of information sent to the recipient reduces the recipient's uncertainty by a factor of two. Thus, information is proportional to the reduction in uncertainty. Consider flipping a fair coin. The probability of heads being the side facing up, P(Heads), is 0.5. After you (the recipient) are told that heads is up, P(Heads) becomes 1. Thus, 1 bit of information is sent to you and the uncertainty is reduced by a factor of two. The amount of information we get is the reduction in uncertainty, which is the inverse of the probability of the event. The number of bits of information can easily be calculated by taking the base-2 logarithm of the reduction in uncertainty.
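The calculation described above can be sketched in a few lines. The bits-of-information formula is exactly as stated (log2 of the uncertainty reduction, i.e. log2(1/p)); the log-loss function shown alongside it is the standard binary cross-entropy from machine learning, included here to make the connection concrete (function names are mine).

```python
import math

# Information in bits for an event with probability p:
# bits = log2(1 / p) = -log2(p)
def bits_of_information(p):
    return math.log2(1 / p)

# Fair coin: learning "heads" halves uncertainty -> exactly 1 bit.
fair_coin = bits_of_information(0.5)

# A 1-in-8 event carries 3 bits, since uncertainty drops by a factor of 8 = 2^3.
rare_event = bits_of_information(1 / 8)

# Log loss (binary cross-entropy) is the same quantity averaged over examples,
# conventionally with the natural log: -[y*ln(p) + (1-y)*ln(1-p)].
def log_loss(y_true, p_pred):
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, p_pred)) / len(y_true)

loss = log_loss([1, 0, 1], [0.9, 0.1, 0.8])
print(fair_coin, rare_event, round(loss, 4))  # 1.0 3.0 0.1446
```

Note how confident, correct predictions (probabilities near the true labels) yield a small loss, while a wrong confident prediction would be penalized heavily, mirroring the "surprise" interpretation of information.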

From adoption to understanding: AI in cyber security beyond Covid-19

Businesses have begun to recognise the promise of AI / ML, and as cyber attacks continue to increase globally, more are adopting these advanced tools to protect themselves. In a survey we conducted among IT decision-makers across the United States and Japan back in 2017, we discovered 74% of businesses in both regions were already using some form of AI or ML to protect their organisations from cyber threats. In our most recent report published this year, we took the pulse of 800 IT professionals with cyber security decision-making power across the US, UK, Japan, Australia and New Zealand. In the process, we discovered that 96% of respondents now use AI/ML tools in their cyber security programs – a significant increase from three years ago! But we weren’t expecting to uncover a pervasive lack of awareness around the benefits of these technologies. Despite the increase in adoption rates for these technologies, our most recent survey found that more than half of IT decision-makers admitted they do not fully understand the benefits of these tools. Even more jarring was that 74% of IT decision-makers worldwide don’t care whether they’re using AI or ML, as long as the tools they use are effective in preventing attacks.

COVID-19 widens the digital innovation gap

"Our findings point to an overconfidence on the part of business leaders that their CMS has the necessary functions to support omnichannel and content orchestration, while builders say they feel disempowered and frustrated." One telling stat the study found is that only 34% of content creators said they can control all the content across digital channels without developer assistance, while 74% of digital leaders think their CMS enables this, Contentful said. Additionally, two-thirds of business leaders believe they are behind competitors in delivering new digital experiences, the company said. "They struggle with maintaining content and brand consistency across channels, hiring qualified talent, juggling multiple systems, and managing a mountain of existing content while simultaneously building more, more, more." Eighty-three percent of respondents believe customers expect an omnichannel digital experience and 88% think brand consistency across these experiences is important, the study said. "This aligns with industry research that shows consistent, connected digital experiences are important throughout the customer lifecycle."

Set up continuous integration for .NET Core with OpenShift Pipelines

Have you ever wanted to set up continuous integration (CI) for .NET Core in a cloud-native way, but you didn’t know where to start? This article provides an overview, examples, and suggestions for developers who want to get started setting up a functioning cloud-native CI system for .NET Core. We will use the new Red Hat OpenShift Pipelines feature to implement .NET Core CI. OpenShift Pipelines are based on the open source Tekton project. OpenShift Pipelines provide a cloud-native way to define a pipeline to build, test, deploy, and roll out your applications in a continuous integration workflow. ... You will need cluster-administrator access to an OpenShift instance to be able to access the example application and follow all of the steps described in this article. If you don’t have access to an OpenShift instance, or if you don’t have cluster-admin privileges, you can run an OpenShift instance locally on your machine using Red Hat CodeReady Containers. Running OpenShift locally should be as easy as crc setup followed by crc start. Also, be sure to install the oc tool; we will use it throughout the examples.
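The local setup mentioned above can be sketched as the following command sequence. This is a hedged sketch, not a verified walkthrough: exact flags, the API endpoint, and where the kubeadmin password is printed may vary by CodeReady Containers version.

```shell
crc setup              # one-time host configuration for CodeReady Containers
crc start              # boot the local single-node OpenShift cluster
eval "$(crc oc-env)"   # put the bundled oc binary on your PATH
# log in with the kubeadmin credentials printed by `crc start`
oc login -u kubeadmin https://api.crc.testing:6443
oc version             # confirm the client can reach the cluster
```

From here, the OpenShift Pipelines operator can be installed from OperatorHub in the web console before defining Tekton pipelines for the .NET Core build.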

Kubernetes Operators in Depth

There are lots of reasons to build an operator from scratch. Typically it's either a development team creating a first-party operator for their product, or a DevOps team looking to automate the management of third-party software. Either way, the development process starts with identifying which cases the operator should manage. At their most basic, operators handle deployment. Creating a database in response to an API resource could be as simple as kubectl apply. But this is little better than built-in Kubernetes resources such as StatefulSets or Deployments. Where operators begin to provide value is with more complex operations. What if you wanted to scale your database? With a StatefulSet you could run kubectl scale statefulset my-db --replicas 3, and you would get three instances. But what if those instances require different configuration? Do you need to designate one instance as the primary, and the others as replicas? What if there are setup steps needed before adding a new replica? In these cases an operator can apply those settings with an understanding of the specific application.
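The primary/replica scenario above can be illustrated with a small sketch of the decision logic an operator might encode when reconciling a scaled database. This is not a real operator (no Kubernetes API calls); the function, pod names, and config fields are hypothetical, chosen only to show application-aware per-instance configuration that a plain StatefulSet cannot express.

```python
# Illustrative reconcile logic: index 0 becomes the primary,
# later instances are replicas pointing at the primary's stable
# StatefulSet DNS name (pod-0.<service>).
def desired_pod_configs(name, replicas):
    configs = []
    for i in range(replicas):
        configs.append({
            "pod": f"{name}-{i}",
            "role": "primary" if i == 0 else "replica",
            "replicate_from": None if i == 0 else f"{name}-0.{name}",
        })
    return configs

# Scaling from 1 to 3: a real operator would also run setup steps
# (e.g., base backup, replication slots) before admitting each replica.
for cfg in desired_pod_configs("my-db", 3):
    print(cfg)
```

An actual operator would compute this desired state in its reconcile loop and patch the cluster toward it, which is exactly the application-specific knowledge the excerpt describes.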

How to Become a Cyber Security Engineer?

Once you're done acquiring all of these required skills, it's time to put them into practice and gain some hands-on experience in the field. You can opt for internships or training programs to get opportunities to work on live projects in a real-time environment. You can also apply for entry-level jobs in the Cyber Security domain, such as Cyber Security Analyst or Network Analyst, to gain further exposure. This professional experience will not only help you understand the core functions of the Cyber Security field, such as the design and implementation of secure network systems, monitoring and troubleshooting, and risk management, but is also crucial for building a successful career as a Cyber Security Engineer, as almost every company requires around 2-3 years of professional experience when hiring Cyber Security Engineers. ... Here comes one of the most prominent parts of this journey – certifications! Now, a question often arises: if a person has the appropriate skill set along with the required experience, why would they need such certifications?

Microsoft announces cloud innovation to simplify security, compliance, and identity

Our compliance cloud solutions help customers more easily navigate today’s biggest risks, from managing data or finding insider threats to dealing with legal issues or addressing standards and regulations. We’ve listened to customers and invested heavily in a set of solutions to help them modernize and keep pace with the evolving, complex compliance and risk management challenges they face. One of our key investment areas is the set of Data Loss Prevention products in Microsoft 365. We recently announced the public preview of Microsoft Endpoint Data Loss Prevention (DLP), which means customers can now identify and protect data on devices. Today, we are announcing the public preview of integration between Microsoft Cloud App Security and Microsoft Information Protection, which extends Microsoft’s data loss prevention (DLP) policy enforcement framework to third-party cloud apps—such as Dropbox, Box, Google Drive, Webex, and more—for a consistent and seamless compliance experience. Customers struggle to keep up with the constantly changing regulations around data protection.

Blockchain / Distributed Ledger Technology (DLT)

Blockchain technologies, including DLTs, are a wonderful example of how an ingenious combination of several (known) technologies was able (in 2009) to create a wholly new approach to a very old (database) problem: namely, how to reliably replicate state in an unreliable or even adversarial environment. The generalization of (i) crypto currencies (such as Bitcoin) to wholly generic crypto assets and (ii) simple crypto token-moving transactions into smart contracts executing between untrusting parties goes beyond naïve database paradigms such as stored procedures. Today, many different DLTs exist, each optimizing for a different set of nonfunctional requirements. Furthermore, the so-called “blockchain trilemma” of simultaneously providing scalability, security, and decentralization has not yet been fully solved (Bitcoin handles ca. 5 transactions per second, Ethereum ca. 10). Blockchain and DLTs remain a considerably overhyped technology looking for business problems they solve better than any existing alternative (e.g., a central SaaS). Despite many claims to the contrary, almost no real productive use cases exist except crypto exchanges.

Blockchain’s untapped potential in revolutionising procurement

Ardent supporters of this technology argue that it is the most significant innovation since the dawn of the internet. Today, blockchain technology has found adoption in nearly every industry, including retail, healthcare and manufacturing. Blockchain technology started in 2008 as a platform on which cryptocurrencies, such as bitcoin, function. Since then, blockchain technology has undergone continuous improvement, finding numerous use cases and applications. Don & Alex Tapscott, authors of Blockchain Revolution (2016), describe blockchain as “an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value”. Utilizing sophisticated algorithms, it maintains an immutable log of information and is able to securely transfer digital assets between network participants. The distributed ledger is accessible to all nodes on the network and everyone is able to access the same information. New information can be appended but the original data cannot be altered.
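The append-only, tamper-evident property described above can be sketched with a minimal hash chain: each entry embeds the hash of the previous entry, so altering historical data breaks every subsequent link. This is a toy illustration of the ledger idea (the procurement-style records are invented), not a real blockchain, which would add consensus and distribution on top.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def _entry_hash(data, prev_hash):
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(chain, data):
    # Each new entry commits to the hash of the one before it.
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    chain.append({"data": data, "prev_hash": prev_hash,
                  "hash": _entry_hash(data, prev_hash)})
    return chain

def verify(chain):
    # Recompute every hash; any edit to past data breaks the chain.
    for i, entry in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else GENESIS
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != _entry_hash(entry["data"], prev_hash):
            return False
    return True

ledger = []
append_entry(ledger, {"po": "PO-1001", "amount": 250})
append_entry(ledger, {"po": "PO-1002", "amount": 410})
ok_before = verify(ledger)          # chain intact
ledger[0]["data"]["amount"] = 999   # tamper with history
ok_after = verify(ledger)           # verification now fails
print(ok_before, ok_after)
```

Appending remains cheap, but rewriting history is detectable by anyone holding the chain, which is the core of the "immutable log" claim in the excerpt.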

Quote for the day:

"The role of leadership is to transform the complex situation into small pieces and prioritize them." -- Carlos Ghosn