Daily Tech Digest - August 21, 2022

Using AI to Automate, Orchestrate, and Accelerate Fraud Prevention

Traditional approaches to fraud prevention and response no longer measure up. First, they’re reactive rather than proactive, focused on damage that has already taken place rather than anticipating, and potentially preventing, future threats. The limitations of this approach play out in commercial off-the-shelf tools that organizations can’t easily adapt to new developments in the landscape. Even the most cutting-edge AI solutions may be limited in detecting new types of fraud schemes, having been trained only on known categories. Second, today’s siloed operations impede progress. Cybersecurity teams and fraud teams, the two groups on the frontlines of the fight, too often work with different tools, workflows, and intelligence sources. These silos extend across the various stages of the fraud-fighting lifecycle: threat hunting, monitoring, analysis, investigation, response, and more. Individual tools address only discrete parts of the process rather than the full continuum, leaving much to fall through the gaps. When one team notices something suspicious, the full organization might not know about the threat, and act upon it, until it’s too late.


Fundamentals of AI Ethics

One of the biggest challenges in AI, bias can stem from several sources. The data used for training AI models might reflect real societal inequalities, or the AI developers themselves might hold conscious or unconscious biases regarding gender, race, age, and more that can wind up in ML algorithms. Discriminatory decisions can ensue, such as when Amazon’s recruiting software penalized applications that included the word “women,” or when a health care risk prediction algorithm exhibited a racial bias that affected 200 million hospital patients. To combat AI bias, AI-powered enterprises are incorporating bias-detecting features into AI programming, investing in bias research, and making efforts to ensure that both the training data used for AI and the teams that develop it are diverse. Gartner predicts that by 2023, “all personnel hired for AI development and training work will have to demonstrate expertise in responsible AI.” Continually monitoring, analyzing, and improving ML algorithms using a human-in-the-loop (HITL) approach – where humans and machines work together, rather than separately – can also help reduce AI bias.
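To make “bias-detecting features” concrete, here is a minimal sketch of one common check, the four-fifths (disparate impact) rule applied to a model’s decisions. The column names, sample data, and 0.8 threshold are illustrative assumptions, not part of the article:

```python
# Minimal sketch of a disparate-impact ("four-fifths rule") check, one of
# the simpler bias-detecting features a team might wire into a pipeline.
# Column names, sample data, and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Fraction of positive outcomes per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's selection rate to the highest's.
    Values below ~0.8 are a conventional red flag for disparate impact."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

# Hypothetical model predictions on a held-out set
preds = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F"],
    "hired":  [0,    1,   1,   1,   0,   0],
})

ratio = disparate_impact_ratio(preds, "gender", "hired")
if ratio < 0.8:
    print(f"Potential disparate impact: selection-rate ratio = {ratio:.2f}")
```

In a HITL workflow, a flag like this would route the model to a human reviewer rather than block it automatically.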


10 nonfunctional requirements to consider in your enterprise architecture

Scalability refers to a system's ability to perform and operate as the number of users or requests increases. It is achievable through horizontal or vertical scaling of machines, or by attaching AutoScalingGroup capabilities. Here are three areas to consider when architecting scalability into your system:

- Traffic pattern: Understand the system's traffic pattern. It's not cost-efficient to spawn as many machines as possible, since they end up underutilized. Here are three sample patterns:
  - Diurnal: Traffic increases in the morning and decreases in the evening for a particular region.
  - Global/regional: Heavy usage of the application in a particular region.
  - Thundering herd: Many users request resources, but only a few machines are available to serve the burst of traffic. This could occur during peak times or in densely populated areas.
- Elasticity: The ability to quickly spawn additional machines to handle a burst of traffic and gracefully shrink when demand subsides (see the sketch after this list).
- Latency: The system's ability to serve a request as quickly as possible.
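As a concrete illustration of elasticity, here is a minimal boto3 sketch that pairs an Auto Scaling group with a target-tracking policy; the group name, sizes, subnets, and 50% CPU target are assumptions for illustration:

```python
# Minimal boto3 sketch: an Auto Scaling group plus a target-tracking policy,
# so capacity grows with load and shrinks when demand drops (elasticity).
# Group/template names, sizes, subnets, and the 50% CPU target are assumptions.
import boto3

autoscaling = boto3.client("autoscaling")

# Floor/ceiling bounds let bursts (e.g., a thundering herd) be absorbed
# without running a fleet of underutilized machines around the clock.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",        # hypothetical name
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    LaunchTemplate={"LaunchTemplateName": "web-tier", "Version": "$Latest"},
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",  # hypothetical subnets
)

# Target tracking keeps average CPU near 50%, scaling out for diurnal or
# regional peaks and scaling back in gracefully afterwards.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```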

When we might meet the first intelligent machines

A few weeks later, Yann LeCun, the chief scientist at Meta’s artificial intelligence (AI) lab and winner of the 2018 Turing Award, released a paper titled “A Path Towards Autonomous Machine Intelligence.” In the paper he shares an architecture that moves past debates over consciousness and sentience to propose a pathway to programming an AI with the ability to reason and plan like humans. Researchers call this artificial general intelligence, or AGI. I think we will come to regard LeCun’s paper with the same reverence that we reserve today for Alan Turing’s 1936 paper that described the architecture for the modern digital computer. Here’s why. ... LeCun’s first breakthrough is in imagining a way past the limitations of today’s specialized AIs with his concept of a “world model.” This is made possible in part by the invention of a hierarchical architecture for predictive models that learn to represent the world at multiple levels of abstraction and over multiple time scales. With this world model, we can predict possible future states by simulating action sequences. In the paper, he notes, “This may enable reasoning by analogy, by applying the model configured for one situation to another situation.”


Why DevOps Governance is Crucial to Enable Developer Velocity

One key takeaway from all this: consolidation of application descriptors enables efficiencies via modularization and reuse of tested and proven elements. This way the DevOps team can respond quickly to dev team needs in a way that is scalable and repeatable. Some potential anti-patterns include:

- Developers throw their application environment change needs over the fence to the DevOps team via the ticketing system, causing the relationship to worsen. Leaders should implement safeguards to detect this scenario in advance and then consider the appropriate response. An infrastructure control plane, in many cases, can provide the capabilities to discover and subsume the underlying IaC files and detect any code drift between the environments. Automating this process (see the sketch after this list) can alleviate much of the friction between developers and DevOps teams.
- Developers take things into their own hands, resulting in an increased number of changes in local IaC files and an associated loss of control. Mistakes happen, things stop working, and finger-pointing ensues.
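One hedged sketch of what automating such a drift check might look like, using Terraform's documented -detailed-exitcode convention (exit code 2 signals pending changes); the directory layout and alerting are hypothetical:

```python
# Sketch of an automated drift check between IaC files and live environments.
# Relies on Terraform's -detailed-exitcode convention: 0 = no changes,
# 1 = error, 2 = pending changes (i.e., drift). Directory paths are hypothetical.
import subprocess

def detect_drift(workdir: str) -> bool:
    """Return True if the live environment has drifted from the IaC files."""
    result = subprocess.run(
        ["terraform", "plan", "-refresh-only", "-detailed-exitcode", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        raise RuntimeError(f"terraform plan failed in {workdir}:\n{result.stderr}")
    return result.returncode == 2

for env in ["environments/dev", "environments/prod"]:
    if detect_drift(env):
        print(f"Drift detected in {env}; flag for review before it widens.")
```

Run on a schedule, a check like this surfaces local, out-of-band IaC edits before they turn into the finger-pointing described above.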


The Role of ML and AI in DevOps Transformation

DevOps is changing fundamentally as a result of AI and ML. The change is most notable in security, which now acknowledges the need for complete protection that is intelligent by design (DevSecOps). Many of us believe that shortening the software development life cycle is the next critical step toward ensuring the secure delivery of integrated systems via Continuous Integration and Continuous Delivery (CI/CD). DevOps is a business-driven method for delivering software, and AI is a technology that can be integrated into the system for improved functioning; they are mutually dependent. With AI, DevOps teams can test, code, release, and monitor software more effectively. Additionally, AI can enhance automation, swiftly locate and fix problems, and improve teamwork. AI has the potential to increase DevOps productivity significantly: it can improve performance by facilitating rapid development and operation cycles and by providing an engaging user experience along the way. Machine learning technologies can simplify data collection from the many components of a DevOps system.
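As one illustration of how ML might “swiftly locate problems” in pipeline telemetry, here is a minimal scikit-learn sketch that flags anomalous CI/CD runs; the metrics, sample data, and contamination rate are assumptions for illustration, not a production recipe:

```python
# Minimal sketch: flag anomalous CI/CD runs with an Isolation Forest, one way
# ML can help a DevOps team locate problems quickly. Metric names, sample
# data, and the contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical history: [build_seconds, test_failures, deploy_retries] per run
history = np.array([
    [310, 0, 0], [295, 1, 0], [330, 0, 1], [305, 0, 0],
    [320, 2, 0], [300, 0, 0], [315, 1, 0], [290, 0, 0],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(history)

# Score today's run: a slow build with many failures should stand out.
todays_run = np.array([[950, 14, 3]])
if model.predict(todays_run)[0] == -1:  # -1 marks an outlier
    print("Pipeline run looks anomalous; alert the on-call engineer.")
```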


Data Lakes Are Dead: Evolving Your Company’s Data Architecture

Changing your data architecture starts with recognizing that the process spans beyond IT – it’s a company-wide shift. Data literacy and culture are fundamental components of launching or changing data architecture. This shift begins with defining your business goals and value chain. What business problem do you want to solve, and how can your data be optimized to accomplish that goal? Different data architectures offer diverse possibilities for conducting analytics, none of which are inherently better than another. Having a company-wide understanding of where you are and where you’re going helps guide what you should be getting out of your data and what architecture would best serve those needs at each level of your organization. Once you’ve identified how to manage your data better to serve your organization, you need to establish overarching data governance. Again, data governance is not a set of procedures for IT but a company-wide culture. An impactful data culture involves a carefully curated ecosystem of roles, responsibilities, tools, systems, and procedures.


7 benefits of using design review in your agile architecture practices

The things involved in a design review include:

- The designer: the person who wants to solve a problem.
- The documentation: the document at the center of attention. It contains information regarding all aspects of the problem and the proposed solution.
- The reviewer: the person who will review the documentation.
- The process: the agreed-upon rules and interactions that define the designer's and reviewer's communications. It may stand alone or be part of a bigger process. For example, in a software development life cycle it could precede development, or in an API specification it could include evaluating changes.
- The review scope: the area the reviewer tries to cover when reviewing the documentation (technical or not).

... Design review has clear value that far outweighs the overhead it introduces, much like code review does in software releases. Organizations should consider it part of their governance model in conjunction with other tools and practices, including architecture review boards.


Enterprise Architecture Governance – Why It Is Important

The Enterprise Architecture organization helps develop and enable the adoption of design, review, execution, and governance capabilities around EA. It provides guidance and governance over enterprise IT solution delivery processes, focused on realizing a number of solution characteristics. These include:

- Standardization: Development and promotion of enterprise-wide IT standards.
- Consistency: Enabling the required levels of information, process, and application integration and interoperability.
- Reuse: Strategies and enabling capabilities for reusing and taking advantage of IT assets at the design, implementation, and portfolio levels. This could include both process/governance and asset-repository considerations.
- Quality: Delivering solutions that meet business functional and technical requirements, with a lifecycle management process that ensures solution quality.
- Cost-effectiveness and efficiency: Consistently leveraging standards, reuse, and quality through repeatable decision-governance processes, reducing total solution lifecycle cost and enabling better returns on IT investments.


How Blockchain Checks Financial Frauds within Companies

Blockchains are made to be resistant to data modification by design. A blockchain can effectively function as an open, distributed ledger that can efficiently and permanently record transactions between two parties. Blockchain can also be used to verify transactions that have been reported. Using the technology, auditors could simply confirm the transactions on readily accessible blockchain ledgers rather than requesting bank statements from clients or contacting third parties for confirmation. Blockchain achieves this immutability by pairing cryptography with the chained ledger structure. Each transaction that the blockchain network deems valid is time-stamped, embedded into a ‘block’ of data, and cryptographically secured by a hashing operation that links to and incorporates the hash of the previous block. The new block then joins the chain as the next chronological update. Metadata from the hash output of the previous block is always incorporated into the hashing process of a new block.
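A minimal Python sketch of the hash-linking described above; the block layout is a simplification for illustration, not a real ledger format:

```python
# Minimal sketch of hash-linking: each block's hash covers its transactions,
# timestamp, and the previous block's hash, so editing any historical record
# invalidates every block after it. The block layout is a simplification.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)  # hash computed over the fields above
    chain.append(block)

def verify(chain: list) -> bool:
    """An auditor's check: recompute each hash and each backward link."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
add_block(chain, [{"from": "A", "to": "B", "amount": 100}])
add_block(chain, [{"from": "B", "to": "C", "amount": 40}])
chain[0]["transactions"][0]["amount"] = 1_000_000  # attempted tampering
print(verify(chain))  # False: the edit breaks the hash links
```

This is why an auditor can trust a readily accessible ledger: any after-the-fact edit is detectable by recomputing the chain.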



Quote for the day:

"Leaders make decisions that create the future they desire." -- Mike Murdock
