
Daily Tech Digest - September 12, 2024

Navigating the digital economy: Innovation, risk, and opportunity

As we move towards the era of Industry 5.0, the digital economy needs to adopt a Human-Centred Design (HCD) approach in which technology layers revolve around the human as the core. By 2030, Organoid Intelligence (OI) is envisaged to dominate the digital economy space, with super-intelligent capabilities spanning multiple disciplines. This capability should democratize digital economy services across sectors in a seamless manner. Such rapid technology adoption exposes the system to cyber risks, which calls for advanced future security solutions such as quantum security embedded with digital currencies like the e-Rupee and cryptocurrencies. The ‘e-rupee’, a virtual equivalent of cash stored in a digital wallet, offers anonymity in payments. ... Indian banks are already piloting blockchain for issuing Letters of Credit, and integrating UPI with blockchain could combine the strengths of both systems, ensuring greater security, ease of use, and instant transactions. Such cyber security threats also create an opportunity for Bitcoin and other cryptocurrencies to expand from their current offerings into sectors such as gaming.


From DevOps to Platform Engineering: Powering Business Success

Platform engineering provides a solution with the tools and frameworks needed to scale software delivery processes, ensuring that organizations can handle increasing workloads without sacrificing quality or speed. It also leads to improved consistency and reliability. By standardizing workflows and automating processes, platform engineering reduces the variability and risk associated with manual interventions. This leads to more consistent and reliable deployments, enhancing the overall stability of applications in production. Further productivity comes from the efficiency it offers developers themselves. Developers are most productive when they can focus on writing code and solving business problems. Platform engineering removes the friction associated with provisioning resources, managing environments, and handling operational tasks, allowing developers to concentrate on what they do best. It also provides the infrastructure and tools needed to experiment, iterate, and deploy new features rapidly, enabling organizations to stay ahead of the curve.


Scaling Databases To Meet Enterprise GenAI Demands

A hybrid approach combines vertical and horizontal scalability, providing flexibility and maximizing resource utilization. Organizations can begin with vertical scaling to enhance the performance of individual nodes and then transition to horizontal scaling as data volumes and processing demands increase. This strategy allows businesses to leverage their existing infrastructure while preparing for future growth — for example, initially upgrading servers to improve performance and then distributing the database across multiple nodes as the application scales. ... Data partitioning and sharding involve dividing large datasets into smaller, more manageable pieces distributed across multiple servers. This approach is particularly beneficial for vector databases, where partitioning data improves query performance and reduces the load on individual nodes. Sharding allows a vector database to handle large-scale data more efficiently by distributing the data across different nodes based on a predefined shard key. This ensures that each node only processes a subset of the data, optimizing performance and scalability.
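
As a rough illustration of the shard-key idea (not how any particular vector database implements it), here is a minimal Java sketch that routes each record to one of several nodes by hashing an assumed shard key; the node names and the modulo-hash scheme are illustrative only:

```java
import java.util.List;

// Minimal sketch: route each record to a shard derived from its shard key.
public class ShardRouter {
    private final List<String> shardNodes;

    public ShardRouter(List<String> shardNodes) {
        this.shardNodes = shardNodes;
    }

    // Hash the shard key and map it onto one node, so each node only
    // ever stores and queries a subset of the data.
    public String nodeFor(String shardKey) {
        int bucket = Math.floorMod(shardKey.hashCode(), shardNodes.size());
        return shardNodes.get(bucket);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(List.of("node-a", "node-b", "node-c"));
        // The same key always lands on the same node, which is what
        // makes point lookups cheap under sharding.
        System.out.println(router.nodeFor("customer-42"));
    }
}
```

A production system would usually prefer consistent hashing or range-based sharding over a plain modulo, so that adding a node does not remap nearly every key.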


Safeguarding Expanding Networks: The Role of NDR in Cybersecurity

NDR plays a crucial role in risk management by continuously monitoring the network for any unusual activities or anomalies. This real-time detection allows security teams to catch potential breaches early, often before they can cause serious damage. By tracking lateral movements within the network, NDR helps to contain threats, preventing them from spreading. Plus, it offers deep insights into how an attack occurred, making it easier to respond effectively and reduce the impact. ... When it comes to NDR, key stakeholders who benefit from its implementation include Security Operations Centre (SOC) teams, IT security leaders, and executives responsible for risk management. SOC teams gain comprehensive visibility into network traffic, which reduces false positives and allows them to focus on real threats, ultimately lowering stress and improving their efficiency. IT security leaders benefit from a more robust defence mechanism that ensures complete network coverage, especially in hybrid environments where both managed and unmanaged devices need protection.


Application detection and response is the gap-bridging technology we need

In the shared-responsibility model, not only is there the underlying cloud service provider (CSP) to consider, but there are external SaaS integrations and internal development and platform teams, as well as autonomous teams across the organization, often leading to opaque systems with a lack of clarity around where responsibilities begin and end. On top of that, there are considerations around third-party dependencies, components, and vulnerabilities to address. Taking that further, the modern distributed nature of systems creates more opportunities for exploitation and abuse. One example is modern authentication and identity providers, each of which is a potential attack vector over which you have limited visibility because you do not own the underlying infrastructure and logging. Finally, there’s the reality that we’re dealing with an ever-increasing velocity of change. As the industry continues its adoption of DevOps and automation, software delivery cycles continue to accelerate. That trend is only likely to increase with the use of genAI-driven copilots.


Data Is King. It Is Also Often Unlicensed or Faulty

A report published in the Nature Machine Intelligence journal presents a large-scale audit of dataset licensing and attribution in AI, analyzing over 1,800 datasets used in training AI models on platforms such as Hugging Face. The study revealed widespread miscategorization, with over 70% of datasets omitting licensing information and over 50% containing errors. In 66% of the cases, the licensing category was more permissive than intended by the authors. The report cautions against a "crisis in misattribution and informed use of popular datasets" that is driving recent AI breakthroughs but also raising serious legal risks. "Data that includes private information should be used with care because it is possible that this information will be reproduced in a model output," said Robert Mahari, co-author of the report and JD-PhD at MIT and Harvard Law School. In the vast ocean of data, licensing defines the legal boundaries of how data can be used. ... "The rise in restrictive data licensing has already caused legal battles and will continue to plague AI development with uncertainty," said Shayne Longpre, co-author of the report and research Ph.D. candidate at MIT. 


AI interest is driving mainframe modernization projects

AI and generative AI promise to transform the mainframe environment by delivering insights into complex unstructured data, augmenting human action with advances in speed, efficiency and error reduction, while helping to understand and modernize existing applications. Generative AI also has the potential to illuminate the inner workings of monolithic applications, Kyndryl stated. “Enterprises clearly see the potential, with 86% of respondents confirming they are deploying, or planning to deploy, generative AI tools and applications in their mainframe environments, while 71% say that they are already implementing generative AI-driven insights as part of their mainframe modernization strategy,” Kyndryl stated. ... While AI will likely shape the future for mainframes, a familiar subject remains a key driver for mainframe investments: security. “Given the ongoing threat from cyberattacks, increasing regulatory pressures, and an uptick in exposure to IT risk, security remains a key focus for respondents this year, with almost half (49%) of survey respondents citing security as the number one driver of their mainframe modernization investments in the year ahead,” Kyndryl stated.


How AI Is Propelling Data Visualization Techniques

AI has improved data processing and cleaning. AI identifies missing data and inconsistencies, which means we end up with more reliable datasets for effective visualization. Personalization is yet another benefit AI has brought. AI-powered tools can tailor visualizations based on set goals, context, and preferences. For example, a user can provide their business requirements, and AI will provide a customized chart and information layout based on these requirements. This saves time and can also be helpful when creativity isn’t flowing as well as we’d like. ... Augmented reality (AR) is useful for geographic data visualization in particular. While traditional maps provide a top-down perspective, AR mapping systems use existing mapping technologies, such as GPS, satellite images, and 3D models, and combine them with real-time data. For example, Google’s Lens in Maps feature uses AI and AR to help users navigate their surroundings by lifting their phones and getting instant feedback about the nearest points of interest. Business users will appreciate how AI automates insights with natural language generation (NLG).


Framing The Role Of The Board Around Cybersecurity Is No Longer About Risk

Having set an unequivocal level of accountability with one executive for cybersecurity, the Board may want to revisit the history of the firm with regard to cyber protection, to ensure that mistakes are not repeated, that funding is sufficient and, overall, that the right timeframes are set and respected, in particular over the mid- to long-term horizon if large-scale transformative efforts are required around cybersecurity. We start to see a list of topics emerging, broadly matching my earlier pieces around the “key questions the Board should ask”, but more than ever, executive accountability is key in the face of current threats to start building up a meaningful and powerful top-down dialogue around cybersecurity. Readers may notice that I have not used the word “risk” even once in this article. Ultimately, risk is about things that may or may not happen: in the face of the “when-not-if” paradigm around cyber threats, and increasingly other threats as well, it is essential for the Board to frame and own business protection as a topic rooted in the reality of the world we live in, not some hypothetical matter which could be somehow mitigated, transferred or accepted.


Embracing First-Party Data in a Cookie-Alternative World

Unfortunately, the transition away from third-party cookies presents significant challenges that extend beyond shifting customer interactions. Many businesses are particularly concerned about the implications for data security and privacy. When looking into alternative data sources, businesses may inadvertently expose themselves to increased security risks. The shift to first-party data collection methods requires careful evaluation and implementation of advanced security measures to protect against data breaches and fraud. It is also crucial to ensure the transition is secure and compliant with evolving data privacy regulations. To ensure the data is secure, businesses should go beyond standard encryption practices and adopt advanced security measures such as tokenization for sensitive data fields, which minimizes the risk of exposing real data in the event of a breach. Additionally, regular security audits are crucial. Organizations should leverage automated tools for continuous security monitoring and compliance checks that can provide real-time alerts on suspicious activities, helping to preempt potential security incidents. 
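
As a minimal illustration of the tokenization idea the article recommends, here is a hedged Java sketch; an in-memory map stands in for a hardened token vault, and the token format is an assumption:

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of tokenization: swap a sensitive field for a random
// token and keep the mapping in a separate, tightly controlled vault.
public class TokenVault {
    private final Map<String, String> tokenToValue = new HashMap<>(); // stand-in for a real vault
    private final SecureRandom random = new SecureRandom();

    public String tokenize(String sensitiveValue) {
        byte[] bytes = new byte[16];
        random.nextBytes(bytes);
        String token = "tok_" + Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        tokenToValue.put(token, sensitiveValue);
        return token; // downstream systems store and log only this
    }

    public String detokenize(String token) {
        return tokenToValue.get(token); // access to this path should be audited and restricted
    }
}
```

Unlike encrypted fields, the token has no mathematical relationship to the original value, so a breach of systems that hold only tokens exposes nothing reversible.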



Quote for the day:

“It's not about who wants you. It's about who values and respects you.” -- Unknown

Daily Tech Digest - December 12, 2022

14 lessons CISOs learned in 2022

Ransomware attacks have increased in 2022, with companies and government entities among the most prominent targets. Nvidia, Toyota, SpiceJet, Optus, Medibank, the city of Palermo, Italy, and government agencies in Costa Rica, Argentina, and the Dominican Republic were among the victims in 2022, a year in which the lines between financially and politically motivated ransomware groups continued to be blurred. A critical piece of any organization's defense strategy should be employee awareness and training because "employees continue to be targeted in threat actor strategies through phishing and other social engineering means," says Gary Brickhouse, CISO at GuidePoint Security. ... Organizations should also do more to keep up with vulnerabilities in both open- and closed-source software. However, this is no easy task since thousands of bugs surface yearly. Vulnerability management tools can help identify and prioritize vulnerabilities found in operating systems and applications.


Grow your own CIO: Building leadership and succession plans

To ensure the long-term health of the company, tech chiefs must focus on building up that middle tier of IT leaders, a reality many CIOs are only now recognizing the need to address. “There are not enough people out there — you have to develop your own people,’’ says Roberts, who estimates that only 10% to 20% of companies are “being intentional about doing formal development programs.’’ Mike Eichenwald, a senior client partner at Korn Ferry Consulting, agrees that it’s important to elevate individuals from vertical leadership roles within the pillars of infrastructure, engineering, product, and security to enterprise leadership roles. With technology converging in all aspects of the business, doing so will help organizations leverage the diversity of experience those midlevel managers have under their belts, and their learning curve and degree of risk will be minimized, Eichenwald says. “Unfortunately, organizations miss an opportunity to cultivate that talent internally and often find themselves needing to reach out to the [external] market to bring it in,’’ he adds.


Open source security fought back in 2022

Anyone paying attention to open source for the past 20 years—or even the past two—will not be surprised to see commercial interests start to flourish around these popular open source technologies. As has become standard, that commercial success is usually spelled c-l-o-u-d. Here's one prominent example: On December 8, 2022, Chainguard, the company whose founders cocreated Sigstore while at Google, released Chainguard Enforce Signing, which enables customers to use Sigstore-as-a-service to generate digital signatures for software artifacts inside their own organization using their individual identities and one-time-use keys. This new capability helps organizations ensure the integrity of container images, code commits, and other artifacts with private signatures that can be validated at any point an artifact needs to be verified. It also allows a dividing line where open source software artifacts are signed in the open in a public transparency log; however, enterprises can sign their own software with the same flow, but with private versions that aren’t in the public log. 
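
Chainguard's actual flow goes through Sigstore's keyless certificates and transparency log, so the following is only a rough JDK-level sketch of the underlying idea: sign an artifact with a freshly generated, one-time-use key pair, then verify it with the corresponding public key. The file name is hypothetical:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class EphemeralSigner {
    public static void main(String[] args) throws Exception {
        byte[] artifact = Files.readAllBytes(Path.of("image-manifest.json")); // hypothetical artifact

        // Generate a one-time-use key pair; in the Sigstore flow the
        // private key is discarded immediately after signing.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("EC");
        gen.initialize(256);
        KeyPair keyPair = gen.generateKeyPair();

        // Sign the artifact with the ephemeral private key.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(artifact);
        byte[] sig = signer.sign();

        // Anyone holding the public key (bound to an identity by a
        // certificate in the real flow) can verify the artifact later.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(artifact);
        System.out.println("signature valid: " + verifier.verify(sig));
    }
}
```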


Turning the vision of a utopic smart city into reality

It’s critical to consider what success looks like, and this can be measured by how user-friendly and efficient a service is, as well as by cost efficiencies. For instance, parking apps that indicate free spaces and process payment can reduce the time to find a parking space in a new city from an hour to just a few minutes. It’s almost impossible to consider smart cities without thinking about the efficient energy management benefits of smart buildings. Sustainable initiatives such as integrated workplace management systems already have the capability to monitor over 50,000 data points per second, analyse the data, and send it to mobile apps. This could see millions of users saving energy. With a long-term vision for smart city platforms to become unified or standardised, one solution can potentially work seamlessly anywhere in the world. Platforms could integrate city infrastructure and navigation, and access to emergency and city services. Transformation will be driven by users empowered with the right data, perhaps even according to their user type of student, tourist, or city resident.


Can real-time data visualisation deliver trust and opportunity?

What is interesting is that so much of this is driven through an ecosystem of partners. No one organisation can deliver the breadth and depth of data and tools needed to make such projects work and there is much to learn from that. Collaborations and partnerships can elevate and enhance real-time data visualisation and value. For many organisations however, real-time data is still virgin territory and real-time visualisation is one of those technologies where reality cannot hope to match expectation, at least according to Jaco Vermeulen, CTO of tech consultancy BML Digital. “Almost every customer says they want real-time visualisation, but then nine out of 10 can’t qualify why they need it, especially when it comes to what decisions or actions it will enable,” says Vermeulen. “This is usually because they start from the belief that the data is always available and therefore should be immediately understandable and yield profound insight. The truth is a bit more challenging.” ... “It is the real-time decisions that create impact,” he says. “Optimising supply chains, reducing waste and pollution, optimising operations, and informing and satisfying consumers. 


IBM’s Krishnan Talks Finding the Right Balance for AI Governance

The challenge comes essentially from not knowing how the sausage was made. One client, for instance, had built 700 models but had no idea how they were constructed or what stages the models were in, Krishnan said. “They had no automated way to even see what was going on.” The models had been built with each engineer’s tool of choice, with no way to know further details. As a result, the client could not make decisions fast enough, Krishnan said, or move the models into production. She said it is important to think about explainability and transparency for the entire life cycle rather than fall into the tendency to focus on models already in production. Krishnan suggested that organizations should ask whether the right data is being used even before something gets built. They should also ask if they have the right kind of model and if there is bias in the models. Further, she said automation needs to scale as more data and models come in. The second trend Krishnan cited was the increased responsible use of AI to manage risk and reputation to instill and maintain confidence in the organization.


13 tech predictions for 2023

“Different edges are implemented for different purposes. Edge servers and gateways may aggregate multiple servers and devices in a distributed location, such as a manufacturing plant. An end-user premises edge might look more like a traditional remote/branch office (ROBO) configuration, often consisting of a rack of blade servers. Telecommunications providers have their own architectures that break down into a provider far edge, a provider access edge, and a provider aggregation edge. ... As we enter 2023, CIOs have earned a seat among the decision-makers and are now at the helm of company-wide technology decision-making. Amid a volatile economic climate, IT leaders must prioritize reducing costs, but they are finding themselves pulled between contrasting concerns of managing spend, dealing with security risks, and fostering innovation. As they navigate an uncertain market, CIOs will need to analyze company usage, along with their previous experience, to rethink business approaches and make decisions. The goal is to identify ways to reduce spend across the company, but not at the expense of key areas like cybersecurity and innovation. 


Preventing a ransomware attack with intelligence: Strategies for CISOs

One of the most effective ways to stop a ransomware attack is to deny attackers access in the first place; without access, there is no attack. The adversary only needs one route of access, yet the defender has to be aware of and protect every entry point into a network. Various types of intelligence can illuminate risk across the pre-attack chain—and help organizations monitor and defend their attack surfaces before they’re targeted by attackers. The best vulnerability intelligence should be robust and actionable. For instance, with vulnerability intelligence that includes exploit availability, attack type, impact, disclosure patterns, and other characteristics, vulnerability management teams can predict the likelihood that a vulnerability could be used in a ransomware attack. With this information in hand, vulnerability management teams, who are often under-resourced, can prioritize patching and preemptively defend against vulnerabilities that could lead to a ransomware attack. Having a deep and active understanding of the illicit online communities where ransomware groups operate can also help inform methodology, and prevent compromise.
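
As a sketch of how such intelligence attributes might feed prioritization, the weighting below is entirely hypothetical (not any vendor's scoring model), as are the CVE identifiers:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical scoring sketch: rank vulnerabilities by the intelligence
// attributes named above (exploit availability, observed ransomware use, impact).
public class VulnPrioritizer {
    record Vuln(String cveId, boolean exploitAvailable, boolean usedInRansomware, double cvss) {
        double riskScore() {
            double score = cvss;                 // base severity
            if (exploitAvailable) score += 3.0;  // weaponized bugs jump the queue
            if (usedInRansomware) score += 5.0;  // observed ransomware use dominates
            return score;
        }
    }

    public static void main(String[] args) {
        List<Vuln> backlog = List.of(               // illustrative, made-up entries
                new Vuln("CVE-2024-0001", true, true, 7.8),
                new Vuln("CVE-2024-0002", false, false, 9.1),
                new Vuln("CVE-2024-0003", true, false, 6.5));

        backlog.stream()
                .sorted(Comparator.comparingDouble(Vuln::riskScore).reversed())
                .forEach(v -> System.out.printf("%s -> %.1f%n", v.cveId(), v.riskScore()));
    }
}
```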


What to do when your devops team is downsized

If you lead teams or manage people, your first thought must be how they feel or how they are personally impacted by the layoffs. Some will be angry if they’ve seen friends and confidants let go; others may be fearful they’re next. Even when leadership does a reasonable job at communication (which is all too often not the case), chances are your teams and colleagues will have unanswered questions. Your first task after layoffs are announced is to open a dialogue, ask people how they feel, and dial up your active listening skills. Other steps to help teammates feel safe include building empathy for personal situations, energizing everyone around a mission, and thanking team members for the smallest wins. Use your listening skills to identify the people who have greater concerns and fears or who may be flight risks. You’ll want to talk to them individually and find ways to help them through their anxieties or recognize when they need professional help. You should also give people and teams time to reflect and adjust. Asking everyone to get back to their sprint commitments and IT tickets is insensitive and unrealistic, especially if the company laid off many people.


Our ChatGPT Interview Shows AI Future in Banking Is Scary-Good

ChatGPT is a large, advanced language processing model that is trained using a technique called generative pre-trained transformer, or GPT. This allows ChatGPT to generate human-like responses to questions and statements in a conversation, making it a powerful tool for a wide range of applications. Compared to traditional chatbots, which are often limited in their ability to understand and generate natural language, ChatGPT has the advantage of being able to provide more accurate and detailed responses. Additionally, because it is trained using a large amount of data, ChatGPT is able to learn and adapt to different conversational styles and contexts, making it more versatile and capable of handling a wider range of scenarios. ... The banking industry can use ChatGPT technology in a number of ways to improve their operations and provide better service to their customers. For example, ChatGPT can be used to automate customer service tasks, such as answering frequently asked questions or providing detailed information about products and services. This can free up customer service representatives to focus on more complex or high-value tasks, improving overall efficiency and customer satisfaction.



Quote for the day:

"Strong leaders encourage you to do things for your own benefit, not just theirs." -- Tim Tebow

Daily Tech Digest - July 30, 2022

Google Drive vs OneDrive: Which cloud solution is right for you?

Google Drive is host to the majority of the cloud storage features that individuals have come to expect. Even with the free plan, users get access to a web interface, a mobile app, and sharing settings that can be adjusted at the admin level. Microsoft OneDrive users will enjoy similar functionality, including automatic syncing, where users indicate the files and folders they want to be backed up, so they are automatically synced with copies in the cloud. One of the biggest divides facing users when determining whether Google Drive or OneDrive is the best fit for them concerns their operating system of choice. ... Fans of Word, Excel, and the like, can still use Google Drive but may have to convert documents into Docs, Sheets, and other Google-made alternatives. That’s not a major issue but might affect how you perceive the performance of each cloud solution. Although there’s not much to choose from in terms of performance, it’s worth pointing out that Microsoft Office, which is usually employed as an offline tool, will take up more storage space than Google Workspace, which can be accessed via your web client. If storage is a major concern for you, this might be worth keeping in mind.


XaaS isn’t everything — and it isn’t serviceable

BPOs and XaaS do share a characteristic that might, in some situations, be a benefit but in most cases is a limitation, namely, the need to commoditize. This requirement isn’t a matter of IT’s preference for simplification, either. It’s driven by business architecture’s decision-makers’ preference for standardizing processes and practices across the board. This might not seem to be an onerous choice, but it can be. Providing a service that operates the same way to all comers no matter their specific and unique needs might cut immediate costs but can be, in the long run, crippling. Imagine, for example, that Human Resources embraces the Business Services Oriented Architecture approach, offering up Human Resources as a Service to its internal customers. As part of HRaaS it provides Recruiting as a Service (RaaS). And to make the case for this transformation it extols the virtues of process standardization to reduce costs. Imagine, then, that you’re responsible for Store Operations for a highly seasonal retailer, one that has to ramp up its in-store staffing from Black Friday through Boxing Day. 


Myth Busting: 5 Misconceptions About FinOps

“FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions.” ... With traditional procurement models, central teams retained visibility and control over expenditures. While this would add layers of time and effort to purchases, it was accepted as a worthwhile tradeoff. Part of the reason FinOps has come into existence is that it enables teams to break away from the rigid, centrally controlled procurement models that used to be the norm. Rather than having a finance team that acts as a central gatekeeper and bottleneck, FinOps enables teams to fully leverage opportunities available for automation in the cloud. Compared to rigid, monthly, or quarterly budget cycles—and being blindsided by cost overruns long after the fact—teams move to continuous optimization. Real-time reporting and just-in-time processes are two of the core principles of FinOps.


Selenium vs Cypress: Does Cypress Replace Selenium?

The Cypress test framework captures snapshots during test execution. It enables QAs or software developers to hover over a precise command in the Command Log to see exactly what happened at that specific phase. Unlike with Selenium, one does not need to add implicit or explicit wait commands to test scripts: Cypress waits for assertions and commands automatically. QAs or developers can use stubs, clocks, and spies to validate and control the behavior of server responses, timers, or functions. The automatic scrolling operation makes sure that a component is in view before any activity is performed (for instance, clicking a button). Previously, Cypress supported only Google Chrome, but with recent updates it now supports Mozilla Firefox and Microsoft Edge as well. As the developer writes commands, the tool executes them in real time, giving visual feedback as they run. It also has excellent documentation. Test execution for a local Selenium Grid can be ported to work with a cloud-based Selenium Grid with minimal effort.
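
To make the waiting contrast concrete, this is the kind of explicit wait a Selenium (Java) script typically needs before interacting with an element, a step Cypress's retry-until-actionable behavior performs automatically; the URL and element id are hypothetical:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ExplicitWaitExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // hypothetical page
            // In Selenium the script must wait explicitly until the element
            // is actionable; Cypress retries this for you behind the scenes.
            new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.elementToBeClickable(By.id("submit")))
                    .click();
        } finally {
            driver.quit();
        }
    }
}
```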


5 Advanced Robotics And Industrial Automation Technologies

One of the most critical advances in robotics and industrial automation technologies is the development of autonomous vehicles. These vehicles can drive themselves, making them safer and more efficient than traditional vehicles. Autonomous vehicles can be used in a variety of ways. For example, they can be used to transport goods around a factory. They can also be used to help people search for objects or people. In all cases, autonomous vehicles are much safer than traditional vehicles. As autonomous vehicles become more common, they will significantly impact the automotive industry. They will reduce the time people need to spend driving cars. They will also reduce the number of accidents that happen on the road. ... One of the most critical safety features of advanced robotics and industrial automation technologies is their danger detection systems. These systems help to protect workers from dangerous situations. One type of danger detection system is the automatic emergency braking system. This system uses cameras and sensors to detect obstacles on the road and brake automatically if necessary.


How to use the Command pattern in Java

The Command pattern is one of the 23 design patterns introduced in the Gang of Four's Design Patterns book. Command is a behavioral design pattern, meaning that it aims to execute an action in a specific code pattern. When it was first introduced, the Command pattern was sometimes explained as callbacks for Java. While it started out as an object-oriented design pattern, Java 8 introduced lambda expressions, allowing for an object-functional implementation of the Command pattern. This article includes an example using a lambda expression in the Command pattern. As with all design patterns, it's very important to know when to apply the Command pattern, and when another pattern might be better. Using the wrong design pattern for a use case can make your code more complicated, not less. We can find many examples of the Command pattern in the Java Development Kit and in the Java ecosystem. One popular example is using the Runnable functional interface with the Thread class. Another is handling events with an ActionListener.
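
The article's own example isn't reproduced here, but a minimal sketch of the lambda-based variant it describes might look like the following; the queueing Invoker and the sample commands are illustrative choices:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class CommandDemo {
    // The command is just a behavioral contract; since Java 8 any
    // single-method interface can be implemented with a lambda.
    @FunctionalInterface
    interface Command {
        void execute();
    }

    // The invoker queues and runs commands without knowing what they do.
    static class Invoker {
        private final Deque<Command> queue = new ArrayDeque<>();

        void submit(Command command) { queue.add(command); }

        void runAll() {
            while (!queue.isEmpty()) queue.poll().execute();
        }
    }

    public static void main(String[] args) {
        Invoker invoker = new Invoker();
        invoker.submit(() -> System.out.println("opening file"));   // lambda as command
        invoker.submit(() -> System.out.println("writing backup")); // callback-style usage
        invoker.runAll();
    }
}
```

Because Command is a functional interface, any lambda or method reference with a matching shape can act as a command, which is exactly the "callbacks for Java" framing mentioned above.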


Singleton Design Pattern in C# .NET Core – Creational Design Pattern

The default constructor of the Singleton class is private; by making the constructor private, client code is prevented from directly creating an instance of the Singleton class. In the absence of a public constructor, the only way to get the object of the Singleton class is to use the global method that returns it, i.e. the static GetInstance() method on the Singleton class. The GetInstance() method creates the object of the Singleton class when it is called for the first time and returns that instance. All subsequent requests to the GetInstance() method get the same instance of the Singleton class that was created during the first request. This standard implementation is also known as lazy instantiation, as the object of the singleton class is created only when it is required, i.e. when the object is first requested via the GetInstance() method. The main problem with the standard implementation is that it is not thread-safe. Consider a scenario where two different requests hit the GetInstance() method at the same time: in that case, there is a possibility that two different objects get created, one by each request.
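
The article targets C#, but the race and its classic fix look the same in Java, so here is a hedged double-checked-locking sketch, with `synchronized` standing in for C#'s `lock` statement:

```java
public final class Singleton {
    // volatile prevents a thread from observing a half-constructed instance
    private static volatile Singleton instance;

    private Singleton() { } // private constructor blocks direct instantiation

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no locking cost
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton(); // only one thread ever gets here
                }
            }
        }
        return instance;
    }
}
```

In C# specifically, wrapping the instance in `Lazy<T>` is a common way to get thread-safe lazy instantiation without writing the locking by hand.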


Here’s when to use data visualization tools, and why they’re helpful

Successful data visualization tools will help you understand your audience, set up a clear framework to interpret data and draw conclusions, and tell a visual story that might not come off as clean and concise with raw data points. Data visualization tools—when used properly—will help to better tell a given story and make it possible to better pull information, see trending patterns, and draw conclusions from large data sets. Data visualization tools also lean into a more aesthetically pleasing approach to mapping and tracking data. It goes beyond simply pasting information onto a pie chart and instead uses design know-how, color theory, and other practices to ensure information is presented in an interesting but easy-to-understand manner. Although data visualization tools have always been popular in the design space, the right data visualization tools can aid just about any field of work or personal interest. For example, data visualization tools can help journalists and editors track trending news stories to better understand reader interest.


9 Tips for Modernizing Aging IT Systems

Once you’ve identified where the failures are in aging systems, compute the costs in fixes, patches, upgrades, and add-ons to bring the system up to modern requirements. Now add any additional costs likely to be incurred in the near future to keep this system going. Compare the total to other available options, including a new or newer system. “While this isn’t a one-size-fits-all approach, the last 2.5 years have proven just how quickly priorities can change,” says Brian Haines, chief strategy officer for FM:Systems, an integrated workspace management system software provider. “Rather than investing in point solutions that may serve the specific needs of the organization today, a workplace tech solution that offers the ability to add or even remove certain functions later to the same system means organizations can more efficiently respond to ever-changing business, employee, workplace, visitor and even asset needs going forward.” “This also helps IT teams drastically reduce the time needed to shop for, invest in, and deploy a separate solution that may or may not be compatible,” Haines adds.


CISA Post-Quantum Cryptography Initiative: Too Little, Too Late?

Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation, agreed that the move comes a little late, but said CISA’s initiative is still a good step. “People have been saying for years that the development of quantum computing would lead to the end of cryptography as we know it,” he said. “With developments in the field bringing us closer to a usable quantum computer, it’s past time to think about how to deal with the future of cryptography.” He pointed out the modern internet relies heavily on cryptography across the board, and quantum computing has the potential to break a lot of that encryption, rendering it effectively useless. “That, in turn, would effectively break many of the internet services we’ve all come to rely on,” Parkin said. “Quantum computing is not yet to the point of rendering conventional encryption useless—at least that we know of—but it is heading that way.” He said he believes the government is in the position to set encryption standards and expectations for normal use and can work closely with industry to make sure the standards are both effective and practical.



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville

Daily Tech Digest - November 29, 2021

The Next Evolutions of DevOps

The old adage is that complexity is like an abacus: You can shift complexity around, but it never really goes away. With the movement to shift responsibility left to development teams, this also means that associated complexity is shifting to the development teams. Modern platform engineering teams provide the infrastructure (compliant Kubernetes clusters) to teams, and any workload that is run on those clusters is up to the development team that owns it. Typically, development teams then focus on features and functionality. ... If you are a DevOps or platform engineer, making your internal customers—your development teams—successful is a great goal to work toward. Crucial to this is disseminating expertise, which can take the form of automation and education. A common practice in the DevSecOps movement is to have some sort of scanning step as part of the build or deployment process, with the internals (how the scan is performed, what happens if something is found, and so on) disseminated to the teams that rely on it.


Fast-paced dash to digital leaves many public services exposed

When organisations introduce new solutions to their technology stack, protection capabilities need to be extended to cover it. But faced with a global pandemic that no one could’ve seen coming, businesses needed to innovate fast, and their security measures failed to keep pace. This created a vulnerability lag, where systems and data have been left unprotected and open to attack. Veritas’ Vulnerability Lag Report explores how this gap between innovation and protection is affecting a variety of organisations, public and private; only three-fifths (61%) believe their organisation’s security measures have fully kept up since the implementation of COVID-led digital transformation initiatives. This means 39% are experiencing some form of security deficit. While such swift digital transformation has delivered a wealth of benefits for public sector organisations, there is a dark side to this accelerated innovation. In the rush to digitally transform, security has taken a back seat. As a result, there may be significant gaps just waiting for cyber criminals to exploit for their own gain.


Towards Better Data Engineering: Mostly People, But Also Process and Technology

Traditional software engineering practices involve designing, programming, and developing software that is largely stateless. On the other hand, data engineering practices focus on scaling stateful data systems and dealing with different levels of complexity. ... Setting up a data engineering culture is therefore crucial for companies to aim for long-term success. “At Sigmoid, these are the problems that we’re trying to tackle with our expertise in data engineering and help companies build a strong data culture,” said Mayur. With expertise in tools such as Spark, Kafka, Hive, Presto, MLflow, visualization tools, SQL, and open source technologies, the data engineering team at Sigmoid helps companies with building scalable data pipelines and data platforms. It allows customers to build data lakes, cloud data warehouses and set up DataOps and MLOps practices to operationalize the data pipelines and analytical model management. Transitioning from a software engineering environment to data engineering is a significant ‘cultural change’ for most companies. 


Performing Under Pressure

Regardless of the task, pressure ruthlessly diminishes our judgment, decision-making, focus, and performance. Pressure moments can disrupt our thoughts, prevent us from thinking clearly, make us feel frustrated, and make us act in undesirable ways. The adverse impact of pressure on our cognitive skills can downgrade our performance, make us perform below our capability, commit more errors and increase the likelihood of failure. Pressure can even make us feel embarrassed and ashamed when we do fail, because we can act in a way that we would otherwise not act and say or do unusual things. Consider these pressure moments. Stepping out of an important client meeting and wondering “why did I make that joke. I was so stupid” or failing to share your opinion while participating in a critical decision meeting and thinking afterward, “Why didn’t I speak up? We could have made a better decision.” Pressure can result in either wrongful action or inaction. Such events make it much more difficult to deal with the pressure next time. But there are things you can do to diminish the effects of pressure on your performance.
 

Behavioral biometrics: A promising tool for enhancing public safety

There are several promising applications in the field of behavioral biometrics. For computer-based identity verification, there are solutions that allow identification based on keystrokes—the frequency and patterns of which prove to be individual enough to recognize identity. Due to the nature of typing, the models can also get better because they can continuously monitor and analyze keystroke data. Software developers tend to also customize confidence thresholds depending on the use case. However, in some cases, the reliability of this behavioral biometric factor is limited to the circumstances. On a different keyboard, individual patterns may differ, and physical conditions like carpal tunnel syndrome or arthritis may affect unique abilities. The lack of benchmarks makes it difficult to compare different providers’ trained algorithms in these cases, providing room for false marketing claims. Image analysis for image recognition can provide more data for behavioral research. Gait and posture biometrics are rapidly becoming useful tools, even if they do not yet match the accuracy and robustness of traditional biometric approaches.
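
As a rough illustration of what keystroke-dynamics models consume, the sketch below computes two classic features, dwell time (how long a key is held) and flight time (the gap between releasing one key and pressing the next); the event model is an assumption, not any vendor's API:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of keystroke-dynamics feature extraction.
public class KeystrokeFeatures {
    // Hypothetical event: which key, and when it was pressed and released.
    record KeyEvent(char key, long downMillis, long upMillis) { }

    static List<double[]> extract(List<KeyEvent> events) {
        List<double[]> features = new ArrayList<>();
        for (int i = 0; i < events.size(); i++) {
            KeyEvent e = events.get(i);
            // Dwell: how long the key was held down.
            double dwell = e.upMillis() - e.downMillis();
            // Flight: gap between releasing the previous key and pressing this one.
            double flight = i > 0 ? e.downMillis() - events.get(i - 1).upMillis() : 0;
            features.add(new double[] {dwell, flight}); // fed to the identity model
        }
        return features;
    }
}
```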


Privacy in Decentralized Finance: Should We Be Concerned?

It is alarming that the pace of DeFi’s growing influence is so fast-paced because many of the issues it presents are not addressed or solved enough in depth. People are investing in all sorts of cryptocurrency before they even educate themselves on how to manage private keys properly. Coupled with the lag in robust protective regulation, the general lack of awareness for DeFi’s threats to privacy inevitably results in large populations of users that are vulnerable to attack. Though some progress has been made at the state level to set standards for blockchain, there is a greater need for industry standardization at the international level. Additionally, the rapid expansion of blockchain technology in many industries is not met with sufficient safety protocols. As such, cybercriminals are aggressively taking action to target both users and exchanges of cryptocurrency in its under-secured state. On the flip side, there are some aspects about DeFi that are directly beneficial to protecting the privacy of users. When comparing the decentralized network that DeFi uses to a centralized one, DeFi’s “peer-to-peer” model is preferable because it prevents a “single source of failure”. 


Hackers Exploit MS Browser Engine Flaw Where Unpatched

The modus operandi of these attackers parallels that of the Iranian attackers, in that it follows the same execution steps. But the researchers did not specify whether the intent of this campaign appeared to be data exfiltration. AhnLab did not respond to Information Security Media Group's request for additional information. With multiple attackers actively exploiting CVE-2021-40444, firms using Microsoft Office should immediately update their software to the latest version as a prevention measure, say researchers from EST Security, which discovered yet another campaign targeting the vulnerability. In this case, the campaign used communications that attempted to impersonate the president of North Korea's Pyongyang University of Science and Technology. "The North Korean cyberthreat organization identified as the perpetrator behind this campaign is actively introducing document-based security vulnerabilities such as PDF and DOC files to customized targeted attacks such as CVE-2020-9715 and CVE-2021-40444," the EST Security researchers say. CVE-2020-9715 is a vulnerability that allows remote attackers to execute arbitrary code on affected installations of Adobe Acrobat Reader DC.


Data Mesh: an Architectural Deep Dive

Data mesh is a paradigm shift in managing and accessing analytical data at scale. Some of the words I highlighted here are really important. First of all is the shift; I will justify why that's the case. Second is an analytical data solution. The word scale really matters here. What do we mean by analytical data? Analytical data is an aggregation of the data that gets generated running the business. It's the data that fuels our machine learning models. It's the data that fuels our reports, and the data that gives us an historical perspective. We can look backward and see how our business or services or products have been performing, and then be able to look forward and predict: what is the next thing that a customer wants? Make recommendations and personalizations. All of those machine learning models can be fueled by analytical data. What does it look like? Today we are in this world with a great divide of data. The operational data is the data that sits in the databases of your applications, your legacy systems, microservices, and they keep the current state.


Google Data Studio Vs Tableau: A Comparison Of Data Visualization Tools

Business analysts and data scientists rely on numerous tools like PowerBI, Google Data Studio, Tableau, and SAP BI, among others, to decipher information from data and make business decisions. Coming from one of the best companies in the world, Google Data Studio, launched in 2016, is a data visualisation platform for creating reports using charts and dashboards. Tableau, on the other hand, was founded more than a decade before Google Data Studio, in 2003, by Chris Stolte, Pat Hanrahan, and Christian Chabot. Tableau Software is one of the most popular visual analytics platforms, with very strong business intelligence capabilities. Data Studio is free, and users can log in with their Google credentials. Over the years, it has become a popular tool to visualise trends in businesses, keep track of client metrics, compare time-based performance of teams, etc. It is a part of the Google Marketing Platform and downloads data from Google’s marketing tools to create reports and charts. Recently, Google announced that users can now include Google Maps in embedded reports in Google Data Studio.


5 Trends Increasing the Pressure on Test Data Provisioning

Not only is the pace of system change growing; the magnitude of changes being made to complex systems today can be greater than ever. This presents a challenge to slow and overly manual data provisioning, as a substantial chunk of data might need updating or replacing based on rapid system changes. A range of practices in development have increased the rate and scale of system change. The adoption of containerization, source control, and easily reusable code libraries allow parallelized developers to rip and replace code at lightning speed. They can easily deploy new tools and technologies, developing systems that are now intricately woven webs of fast-shifting components. A test data solution today must be capable of providing consistent test data “journeys” based on the sizeable impact of these changes across interrelated system components. Data allocation must occur at the pace with which developers chop-and-change reusable and containerised components. 



Quote for the day:

"One must be convinced to convince, to have enthusiasm to stimulate the others." -- Stefan Zweig

Daily Tech Digest - October 15, 2021

You’ve migrated to the cloud, now what?

When thinking about cost governance, for example, in an on-premises infrastructure world, costs increase in increments when we purchase equipment, sign a vendor contract, or hire staff. These items are relatively easy to control because they require management approval and are usually subject to rigid oversight. In the cloud, however, an enterprise might have 500 virtual machines one minute and 5,000 a few minutes later when autoscaling functions engage to meet demand. Similar differences abound in security management and workload reliability. Technology leaders with legacy thinking are faced with harsh trade-offs between control and the benefits of cloud. These benefits can include agility, scalability, lower cost, and innovation, and they require heavy reliance on automation rather than manual legacy processes. This means that the skillsets of an existing team may not be the same skillsets needed in the new cloud order. When writing a few lines of code supplants plugging in drives and running cable, team members often feel threatened. This can mean that success requires not only a different way of thinking but also a different style of leadership.


A new edge in global stability: What does space security entail for states?

Observers recently recentred the debate on a particular aspect of space security, namely anti-satellite (ASAT) technologies. The destruction of assets placed in outer space is high on the list of issues they identify as most pressing and requiring immediate action. As a result, some researchers and experts rolled out propositions to advance a transparent and cooperative approach, promoting the cessation of destructive operations in both outer space and launched from the ground. One approach was the development of ASAT Test Guidelines, first initiated in 2013 by a Group of Governmental Experts on Outer Space Transparency and Confidence-Building Measures. Another is through general calls to ban anti-satellite tests, to not only build a more comprehensive arms control regime for outer space and prevent the production of debris, but also reduce threats to space security and regulate destabilising force. Many space community members threw their support behind a letter urging the United Nations (UN) General Assembly to take up for consideration a kinetic anti-satellite (ASAT) Test Ban Treaty for maintaining safe access to Earth orbit and decreasing concerns about collisions and the proliferation of space debris.


From data to knowledge and AI via graphs: Technology to support a knowledge-based economy

Leveraging connections in data is a prominent way of getting value out of data. Graph is the best way of leveraging connections, and graph databases excel at this. Graph databases make expressing and querying connection easy and powerful. This is why graph databases are a good match in use cases that require leveraging connections in data: Anti-fraud, Recommendations, Customer 360 or Master Data Management. From operational applications to analytics, and from data integration to machine learning, graph gives you an edge. There is a difference between graph analytics and graph databases. Graph analytics can be performed on any back end, as they only require reading graph-shaped data. Graph databases are databases with the ability to fully support both read and write, utilizing a graph data model, API and query language. Graph databases have been around for a long time, but the attention they have been getting since 2017 is off the charts. AWS and Microsoft moving in the domain, with Neptune and Cosmos DB respectively, exposed graph databases to a wider audience.
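
To see what "leveraging connections" means mechanically, here is a hand-rolled breadth-first traversal in Java over an adjacency list, the kind of multi-hop reachability question (for example, accounts linked through a shared device in an anti-fraud check) that a graph database expresses as a short declarative query; the account IDs and relation are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ConnectionWalker {
    // Breadth-first search: collect every node reachable from a starting node.
    static Set<String> reachable(Map<String, List<String>> graph, String start) {
        Set<String> seen = new HashSet<>(Set.of(start));
        Deque<String> frontier = new ArrayDeque<>(List.of(start));
        while (!frontier.isEmpty()) {
            for (String next : graph.getOrDefault(frontier.poll(), List.of())) {
                if (seen.add(next)) frontier.add(next); // enqueue unseen neighbors
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        // Hypothetical "shares a device with" edges between accounts.
        Map<String, List<String>> sharedDevice = Map.of(
                "acct-1", List.of("acct-2"),
                "acct-2", List.of("acct-3"));
        System.out.println(reachable(sharedDevice, "acct-1")); // the whole suspected ring
    }
}
```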


Observability Is the New Kubernetes

So where will observability head in the next two to five years? Fong-Jones said the next step is to support developers in adding instrumentation to code, expressing a need to strike a balance between easy and out of the box on one hand, and annotations and customizations per use case on the other. Suereth said that the OpenTelemetry project is heading in the next five years toward being useful to app developers, where instrumentation can be particularly expensive. “Target devs to provide observability for operations instead of the opposite. That’s done through stability and protocols.” He said that observability right now, as with Prometheus, is much more focused on operations than on developer languages. “I think we’re going to start to see applications providing observability as part of their own profile.” Suereth continued that the OpenTelemetry open source project has an objective to have an API with all the traces, logs and metrics with a single pull, but it’s still to be determined how much data should be attached to it.
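
For a concrete sense of the instrumentation being discussed, manually creating a span with the OpenTelemetry Java API looks roughly like this; the tracer name, span name, and attribute key are illustrative, and export depends on whichever SDK and exporter are configured:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class CheckoutHandler {
    private final Tracer tracer = GlobalOpenTelemetry.getTracer("checkout-service");

    void processOrder(String orderId) {
        Span span = tracer.spanBuilder("process-order").startSpan();
        try (Scope ignored = span.makeCurrent()) { // child spans attach automatically
            span.setAttribute("order.id", orderId); // attach business context
            // ... business logic here ...
        } catch (RuntimeException e) {
            span.recordException(e); // failures become part of the trace
            throw e;
        } finally {
            span.end(); // export is handled by the configured SDK/exporter
        }
    }
}
```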


Data Exploration, Understanding, and Visualization

Many scaling methods require knowledge of critical values within the feature distribution and can cause data leakage. For example, a min-max scaler should be fit on training data only, rather than on the entire data set. When the minimum or maximum comes from the test set, you have introduced data leakage into the prediction process. ... The one-dimensional frequency plot shown below each distribution aids understanding of the data. At first glance, this information looks redundant, but these plots directly address problems with representing data in histograms or as distributions. For example, when data is transformed into a histogram, the number of bins must be specified. It is difficult to decipher any pattern with too many bins, and with too few bins the shape of the data distribution is lost. Moreover, representing data as a distribution assumes the data is continuous. When data is not continuous, this may indicate an error in the data or an important detail about the feature. One-dimensional frequency plots fill in the gaps where histograms fail.
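
A minimal Java sketch of the leak-free workflow: the scaler's minimum and maximum come from the training split only, and the same parameters are then applied to unseen data (the numbers are illustrative):

```java
// Minimal sketch: fit the scaler's parameters on training data only,
// then reuse those parameters on test data to avoid leakage.
public class MinMaxScaler {
    private double min = Double.POSITIVE_INFINITY;
    private double max = Double.NEGATIVE_INFINITY;

    public void fit(double[] train) {
        for (double v : train) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
    }

    public double[] transform(double[] data) {
        double[] scaled = new double[data.length];
        for (int i = 0; i < data.length; i++) {
            scaled[i] = (data[i] - min) / (max - min); // test values may fall outside [0, 1]
        }
        return scaled;
    }

    public static void main(String[] args) {
        MinMaxScaler scaler = new MinMaxScaler();
        scaler.fit(new double[] {2.0, 4.0, 6.0});              // training split only
        double[] test = scaler.transform(new double[] {8.0});  // (8-2)/(6-2) = 1.5
        System.out.println(test[0]);
    }
}
```

A test value outside the training range scales outside [0, 1], which is expected; refitting on the full data set to "fix" this is precisely the leakage being warned about.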


DevSecOps: A Complete Guide

Both DevOps and DevSecOps use some degree of automation for simple tasks, freeing up time for developers to focus on more important aspects of the software. The concept of continuous processes applies to both practices, ensuring that the main objectives of development, operation, or security are met at each stage. This prevents bottlenecks in the pipeline and allows teams and technologies to work in unison. By working together, development, operational or security experts can write new applications and software updates in a timely fashion, monitor, log, and assess the codebase and security perimeter as well as roll out new and improved codebase with a central repository. The main difference between DevOps and DevSecOps is quite clear. The latter incorporates a renewed focus on security that was previously overlooked by other methodologies and frameworks. In the past, the speed at which a new application could be created and released was emphasized, only to be stuck in a frustrating silo as cybersecurity experts reviewed the code and pointed out security vulnerabilities.


Skilling employees at scale: Changing the corporate learning paradigm

Corporate skilling programs have been founded on frameworks and models from the world of academia. Even when we have moved to digital learning platforms, the core tenets of these programs tend to remain the same. There is a standard course with finite learning material, a uniformly structured progression to navigate the learning, and the exact same assessment tool to measure progress. This uniformity and standardization have been the only approach for organizations to skill their employees at scale. As a result, organizations made a trade-off; content-heavy learning solutions which focus on knowledge dissemination but offer no way to measure the benefit and are limited to vanity metrics have become the norm for training the workforce at large. On the other hand, one-on-one coaching programs that promise results are exclusive only to the top one or two percent of the workforce, usually reserved for high-performing or high-potential employees. This is because such programs have a clear, measurable, and direct impact on behavioral change and job performance.


The Ultimate SaaS Security Posture Management (SSPM) Checklist

The capability of governance across the whole SaaS estate is both nuanced and complicated. While the native security controls of SaaS apps are often robust, it falls on the responsibility of the organization to ensure that all configurations are properly set — from global settings, to every user role and privilege. It only takes one unknowing SaaS admin to change a setting or share the wrong report and confidential company data is exposed. The security team is burdened with knowing every app, user and configuration and ensuring they are all compliant with industry and company policy. Effective SSPM solutions come to answer these pains and provide full visibility into the company's SaaS security posture, checking for compliance with industry standards and company policy. Some solutions even offer the ability to remediate right from within the solution. As a result, an SSPM tool can significantly improve security-team efficiency and protect company data by automating the remediation of misconfigurations throughout the increasingly complex SaaS estate.


Why gamification is a great tool for employee engagement

Gamification is the beating heart of almost everything we touch in the digital world. With employees working remotely, it is a golden solution for employers. Applied in the right format, gaming can help create engagement in today's remote working environment, motivate personal growth, and encourage continuous improvement across an organization. ... In the connected workspace, gamification is essentially a method of providing simple goals and motivations that rely on digital rather than in-person engagement. At the same time, there is a tacit understanding between the game designer and the "player" that when these goals are aligned in a way that benefits the organization, the rewards often impact more than the bottom line. Engaged employees are a valuable part of achieving defined business goals, and studies show that non-engagement hurts the bottom line. At the same time, motivated employees are more likely to want to make the customer experience as satisfying as possible, especially if there is internal recognition of a job well done.


10 Cloud Deficiencies You Should Know

What happens if your cloud environment goes down due to challenges outside your control? If your answer is “Eek, I don’t want to think about that!” you’re not prepared enough. Disaster preparedness plans can include running your workload across multiple availability zones or regions, or even in a multicloud environment. Make sure you have stakeholders (and back-up stakeholders) assigned to any manual tasks, such as switching to backup instances or relaunching from a system restore point. Remember, don’t wait until you’re faced with a worst-case scenario to test your response. Set up drills and trial runs to make sure your ducks are quacking in a row. One thing you might not imagine the cloud being is … boring. Without cloud automation, there are a lot of manual and tedious tasks to complete, and if you have 100 VMs, they’ll require constant monitoring, configuration and management 100 times over. You’ll need to think about configuring VMs according to your business requirements, setting up virtual networks, adjusting for scale and even managing availability and performance. 
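
To give a sense of what that automation looks like in practice, here is a minimal sketch using boto3 (assuming AWS credentials and region access are already configured; the `env` tag filter is a hypothetical convention for illustration):

```python
import boto3

# Instead of checking 100 VMs by hand, enumerate them programmatically.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.describe_instances(
    Filters=[{"Name": "tag:env", "Values": ["production"]}]  # hypothetical tag
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        state = instance["State"]["Name"]
        print(instance["InstanceId"], state)
        # From here, an automation script could restart stopped instances,
        # verify security-group settings, or trigger scaling actions.
```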



Quote for the day:

"Leaders begin with a different question than others. Replacing who can I blame with how am I responsible?" -- Orrin Woodward

Daily Tech Digest - April 18, 2021

How Can Financial Institutions Prepare for AI Risks?

In exploring the potential risks of AI, the paper provided “a standardized practical categorization” of risks related to data, AI and machine learning attacks, testing, trust, and compliance. Robust governance frameworks must focus on definitions, inventory, policies and standards, and controls, the authors noted. Those governance approaches must also address the potential for AI to present privacy issues and potentially discriminatory or unfair outcomes “if not implemented with appropriate care.” In designing their AI governance mechanisms, financial institutions must begin by identifying the settings where AI cannot replace humans. “Unlike humans, AI systems lack the judgment and context for many of the environments in which they are deployed,” the paper stated. “In most cases, it is not possible to train the AI system on all possible scenarios and data.” Hurdles such as the “lack of context, judgment, and overall learning limitations” would inform approaches to risk mitigation, the authors added. Poor data quality and the potential for machine learning/AI attacks are other risks financial institutions must factor in.


How to turn everyday stress into ‘optimal stress’

What triggers a stress response in one person may hardly register with another. Some people feel stressed and become aggressive, while others withdraw. Likewise, our methods of recovery are also unique—riding a bike, for instance, versus reading a book. Executives, however, aren’t usually aware of their stress-related patterns and idiosyncrasies and often don’t realize the extent of the stress burden they are already carrying. Leadership stereotypes don’t help with this. It’s no surprise that we can’t articulate how stress affects us when we equate success with pushing boundaries to excess, fighting through problems, and never admitting weakness. Many people we know can speak in detail about a favorite vacation but get tongue-tied when asked what interactions consistently trigger stress for them, or what time of day they feel most energized. To reach optimal stress, we need to be conscious of our stress; in neurological terms, it’s the first step toward lasting behavior change. As the psychiatrist and author Daniel Siegel writes, “Where attention goes, neural firing flows and neural connection grows.” And it is these newly grown neurological pathways that define our behavior and result in new habits.


How to Empower Transformation and Create ROI with Intelligent Automation

CIOs see ROI delivered in multiple ways. For example, a recent Forrester study identified that Bizagi’s platform offered 288% financial returns. CIOs also seek benefits other than cost savings, such as increased net promoter scores, realized upsell opportunities, and improved end-user productivity. ... The caveat is that automation sets a very high bar on what machines can perform reliably, especially when employees often interpret automation to mean “without any human involvement.” For example, you can automate many steps in a loan application and its approval process when the applicant checks all the right boxes. However, most financial transactions have complex exceptions and actions that require orchestration across multiple systems. Managers and employees know the daily complications, and oversimplifying their jobs with rudimentary automations often leads to a backlash from vocal detractors. That’s why CIOs and IT leaders need more than simple task automation, departmental applications, or one-off data analysis. Digital leaders recognize the importance of intelligence and orchestration to modernize workflows, meet customer expectations, leverage machine learning capabilities, and enable implementation of the required business rules.
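
To make the "automation with exceptions" point concrete, here is a minimal, hypothetical sketch of a loan-approval step that automates the straightforward cases and routes the exceptions to a human reviewer (all fields and thresholds are invented for illustration):

```python
# Hypothetical straight-through-processing rules for illustration only.
def process_application(app):
    # Auto-decide when the applicant checks all the right boxes.
    if app["credit_score"] >= 700 and app["debt_to_income"] <= 0.35:
        return "auto-approved"
    if app["credit_score"] < 550:
        return "auto-declined"
    # Complex exceptions are orchestrated to a human underwriter
    # rather than being oversimplified away.
    return "escalated-to-underwriter"

print(process_application({"credit_score": 720, "debt_to_income": 0.30}))  # auto-approved
print(process_application({"credit_score": 640, "debt_to_income": 0.45}))  # escalated
```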


Understand Bayes’ Theorem Through Visualization

Before going to any definitions: Bayes’ Theorem is normally used when we have a hypothesis, we have observed some evidence, and we would like to know the probability that the hypothesis holds given that the evidence is true. This may sound a bit confusing, so let’s use the above visualization for a better explanation. In the example, we want to know the probability of selecting a female engineer given that she has finished a Ph.D. The first thing we need is the probability of selecting a female engineer from the population without considering any evidence; this term, P(H), is called the “prior”. ... Bayes’ theorem underlies Bayesian statistics, which relies on subjective probabilities and uses the theorem to update knowledge and beliefs regarding the events and quantities of interest based on data. Hence, based on some knowledge, we can draw initial inferences about the system (the “prior” in Bayes) and then “update” these inferences as new data arrives to obtain the “posterior”. There are also terms like Bayesian inference and frequentist statistical inference, which are not covered in this article.
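
As a numeric sanity check of the prior-to-posterior update, here is a tiny sketch where all the probabilities are invented for illustration:

```python
# P(H): prior probability of selecting a female engineer (hypothetical figure).
p_h = 0.30
# P(E|H): probability of a Ph.D. given a female engineer (hypothetical).
p_e_given_h = 0.10
# P(E|not H): probability of a Ph.D. among everyone else (hypothetical).
p_e_given_not_h = 0.05

# P(E) via the law of total probability.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E) -- the "posterior".
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))  # ~0.462: the evidence raises the prior from 0.30
```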


Leveraging Geolocation Data for Machine Learning: Essential Techniques

Fortunately, we don’t have to worry about parsing these different formats and manipulating low-level data structures. We can use the wonderful GeoPandas library in Python, which makes all of this very easy for us. It is built on top of Pandas, so all of the powerful features of Pandas are already available to you. It works with GeoDataFrames and GeoSeries, which are “spatially aware” versions of Pandas DataFrame and Series objects, and it provides a number of additional methods and attributes for operating on geodata within a DataFrame. A GeoDataFrame is nothing but a regular Pandas DataFrame with an extra ‘geometry’ column in every row that captures the location data. GeoPandas can also conveniently load geospatial data from all of these different geo file formats into a GeoDataFrame with a single command, and we can then perform operations on the GeoDataFrame in the same way regardless of the source format. This abstracts away all of the differences between these formats and their data structures.
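
A minimal sketch of that workflow (the file name is a hypothetical placeholder; any format GeoPandas supports would load the same way):

```python
import geopandas as gpd

# One call loads geospatial data; the same call works for Shapefile,
# GeoJSON, GeoPackage, and other supported formats.
gdf = gpd.read_file("cities.geojson")  # hypothetical file

print(gdf.head())      # regular DataFrame columns plus a 'geometry' column
print(gdf.crs)         # the coordinate reference system of the data
print(gdf.geometry.area.head())  # spatial operations via the geometry column
```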


Why Probability Theory is Hard

First, probability theorists don’t even agree on what probability is or how to think about it. While there is broad consensus about certain classes of problems involving coins, dice, coloured balls in perfectly mixed bags, and lottery tickets, as soon as we move into practical probability problems with more vaguely defined outcome spaces, we are served an ontological omelette of frequentism, Bayesianism, Kolmogorov axioms, Cox’s theory, subjective and objective probabilities, outcome spaces, and propositional credences. Even if the probationary probability theorist is eventually indoctrinated (by choice or by the accident of a course instructor) into one or other school, none of these frameworks is conceptually easy to access. Small wonder that so much probabilistic pedagogy is boiled down to methodological rote learning and rules of thumb. There’s more: probability theory is often not taught very well. The notation can be confusing; and don’t get me started on measure theory. The good news is that, in terms of practical applications, very little can get you a very long way.


Open-source, cloud-native projects: 5 key questions to assess risk

Another important indicator of risk relates to who owns or controls an open-source project. From a risk perspective, projects with neutral governance, where decisions are made by people from a variety of different companies, present a lower risk. The lowest-risk projects are ones that fall under vendor-neutral foundations. Kubernetes has been successful in part because it is shepherded by the Cloud Native Computing Foundation (CNCF). Putting Kubernetes into a neutral foundation provided a level playing field where people from different companies could work together as equals, to create something that benefits the entire ecosystem. The CNCF focuses on helping cloud-native projects set themselves up to be successful with resource documents, maintainer sessions, and help with various administrative tasks. In contrast, open-source projects controlled by a single company have higher risk because they operate at the whims of that company. Outside contributors have little recourse if that company decides to go in a direction that doesn't align with the expectations of the community's other participants. This can manifest as licensing changes, forks, or other governance issues within a project.


Interpreted vs. compiled languages: What's the difference?

In contrast to compiled languages, interpreted languages generate an intermediary instruction set that is not recognizable as source code. Nor is the intermediary architecture-specific the way machine code is. The Java language calls this intermediary form bytecode. This intermediary deployment artifact is platform agnostic, which means it can run anywhere, with one caveat: each runtime environment needs a preinstalled interpreter, which converts the intermediary code into machine code at runtime. The Java virtual machine (JVM) is the required interpreter that must be installed in any target environment for applications packaged and deployed as bytecode to run. The benefit of applications built with an interpreted language is that they can run in any environment. In fact, one of the mantras of the Java language when it was first released was "write once, run anywhere," as Java apps were not tied to any one OS or architecture. The drawback of an interpreted language is that the interpretation step consumes additional clock cycles, especially in comparison to applications packaged and deployed as machine code.
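
The same intermediary concept exists in Python, which offers a quick way to see bytecode firsthand (a sketch of the general idea, not the Java toolchain the article describes; the `add` function is invented for illustration):

```python
import dis

def add(a, b):
    return a + b

# Print the platform-agnostic bytecode the interpreter executes at runtime;
# the CPython VM plays the same role here that the JVM plays for Java bytecode.
dis.dis(add)
```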


Disrupting the disruptors: Business building for banks

The strategic target of a new build should be nothing less than radical disruption. Banks should aim not only to expand their own core offerings but also to create a unique combination of products and functionality that will disrupt the market. Successful new launches come with a clear sense of mission and direction, as well as a road map to profitability (see sidebar “Successful business builders are realistic about the journey”). One regional digital attacker in Asia targeted merchant acquiring and developed a network with more than 700,000 merchants. In just four months, it created a product with the capacity to process payments through QR codes at the point-of-sale systems of the two main merchant acquirers in the region and to transfer money between personal accounts. In another case, an incumbent bank launched a state-of-the-art digital solution in just ten months. In China, a leading global bank launched a digital-hybrid business that focuses on financial planning and uses social media to connect with customers. A midsize Asian bank, meanwhile, launched an ecosystem of services for the digital-savvy mass and mass-affluent segment, aimed at making it easier for customers to manage their financial lives.


9 Trends That Are Influencing the Adoption of DevOps and DevSecOps

Despite the challenges of adopting these approaches, the potential gains to be made are generally seen as justifying this risk. For most development teams, this will first mean moving to a DevOps process, and then later evolving DevOps into DevSecOps. Beyond the operational gains that can be made during this transition lie a number of other advantages. One of the often overlooked effects of just how widespread DevOps has become is that, for many developers, it has become the default way of working. According to open source contributor and DevOps expert Barbara Ericson of Cloud Defense, “DevOps has suddenly become so ubiquitous in software engineering circles that you’ll be forgiven if you failed to realize the term didn’t exist until 2009...DevOps extends beyond the tools and best practices needed to accomplish its implementation. The successful introduction of DevOps demands a change in culture and mindset.” This trend is only likely to continue in the future, and could make it difficult for firms to hire talented developers if they are lagging behind on their own transition to DevOps.



Quote for the day:

"Leadership is about being a servant first." -- Allen West