Tuesday, January 14, 2020

storage as a service

Storage as a service is a managed service in which the provider supplies the customer with access to a data storage platform. The service can be delivered on premises from infrastructure that is dedicated to a single customer, or it can be delivered from the public cloud as a shared service that's purchased by subscription and is billed according to one or more usage metrics.
STaaS customers access individual storage services through standard system interface protocols or application programming interfaces (APIs). Typical offerings include bare-metal storage capacity; raw storage volumes; network file systems; storage objects; and storage applications that support file sharing and backup lifecycle management.

Storage as a service was originally seen as a cost-effective option for small and midsize businesses that lacked the technical personnel and capital budget to implement and maintain their own storage infrastructure.

process mining software

Process mining software is a type of programming that analyzes data in enterprise application event logs in order to learn how business processes are actually working. Process mining software provides the transparency administrators need to identify bottlenecks and other areas of inefficiency so they can be improved.
When the software is used to analyze the transaction or audit logs of a workflow management system, data visualization components in the software can show administrators what processes are running at any given time. Some process mining apps also allow users to drill down to view the individual documents associated with a process.
If a process model doesn't already exist, the software will perform business process discovery to create one automatically, sometimes with the aid of artificial intelligence and machine learning. If there already is a model, the process mining software will compare it to the event log to identify discrepancies and their possible causes.
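The discovery step described above can be sketched in a few lines: the directly-follows relation (how often one activity immediately follows another within a case) is the basic building block that discovery algorithms work from. The event log below is a made-up toy example, not output from any real system.

```python
from collections import defaultdict

# Toy event log: (case_id, activity), already ordered by timestamp within each case.
event_log = [
    ("c1", "receive order"), ("c1", "check credit"), ("c1", "ship"),
    ("c2", "receive order"), ("c2", "check credit"), ("c2", "reject"),
    ("c3", "receive order"), ("c3", "ship"),  # skips the credit check
]

def discover_directly_follows(log):
    """Count how often one activity directly follows another within a case."""
    traces = defaultdict(list)
    for case_id, activity in log:
        traces[case_id].append(activity)
    counts = defaultdict(int)
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

model = discover_directly_follows(event_log)
# ("receive order", "check credit") occurs in cases c1 and c2:
print(model[("receive order", "check credit")])  # 2
# Case c3 reveals a path that bypasses the credit check:
print(model[("receive order", "ship")])  # 1
```

Counting these pairs across millions of log entries is how a tool can surface paths, such as the credit-check bypass above, that no one designed into the official process model.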
Use cases

Process mining software is especially useful for optimizing workflows in process-oriented disciplines such as business process reengineering (BPR). The technology is often applied to the most common and complex business processes executed in most organizations, such as order to cash, accounts payable and supply chain management.

An organization might use process mining software to find the cause of unexpected delays in invoice processing, for example, by examining the logs of the accounts payable module in an ERP system. Process mining software can analyze millions of transaction records and spot deviations from normal workflows that might indicate increased risk. Analysis of an audit log can also spot deviations from important regulations, such as the U.S. Sarbanes-Oxley Act (SOX) rules for archiving business records or HIPAA requirements for protecting medical records.
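As a minimal illustration of spotting such deviations, the sketch below compares each case's recorded trace against an expected invoice-processing path and flags mismatches. Real process mining tools use far more sophisticated alignment-based conformance checking; the case IDs and activities here are hypothetical.

```python
# Expected path for an invoice, per the (hypothetical) process model.
expected = ["receive invoice", "approve", "pay"]

# Traces reconstructed from an accounts payable event log (invented data).
logs = {
    "inv-1": ["receive invoice", "approve", "pay"],
    "inv-2": ["receive invoice", "pay"],                        # payment without approval
    "inv-3": ["receive invoice", "approve", "approve", "pay"],  # duplicate approval
}

def find_deviations(traces, expected_trace):
    """Return case IDs whose recorded trace differs from the expected path."""
    return sorted(case for case, trace in traces.items() if trace != expected_trace)

print(find_deviations(logs, expected))  # ['inv-2', 'inv-3']
```

A case like `inv-2`, paid without approval, is exactly the kind of control deviation an auditor checking SOX compliance would want surfaced automatically.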

white box testing

White box testing is a software testing methodology that uses source code as the basis for designing tests and test cases. White box testing takes an inward look at the framework and components of a software application to check the internal structure and design of the software. White box testing is also called transparent, clear and glass box testing for this reason. This test type can also be applied to unit, system and integration testing.
White box testing usually involves tracing possible execution paths through the code and working out which input values would force the execution of those paths. The tester, often the developer who wrote the code, verifies the code against its design, which is why familiarity with the code is important for whoever runs the tests. Code is tested by running input values through it and checking whether the output matches what is expected. Testers can work out the smallest number of paths necessary to test, or "cover," all the code; static analysis tools can aid in the same job, more quickly and more reliably.
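As a sketch of this path-based approach, the hypothetical function below has two decision points, giving four main execution paths plus an error path. Each test input was chosen by reading the code, white box style, to force one specific path; the function and its fee schedule are invented for illustration.

```python
def shipping_fee(weight_kg, express):
    """Hypothetical fee calculation with two decision points -> four main paths."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    # Branch 1: flat fee up to 1 kg, per-kg surcharge above that.
    fee = 5.0 if weight_kg <= 1.0 else 5.0 + 2.0 * (weight_kg - 1.0)
    # Branch 2: express doubles the fee.
    if express:
        fee *= 2
    return fee

# One input per path, chosen by inspecting the branches above:
assert shipping_fee(0.5, express=False) == 5.0    # light, standard
assert shipping_fee(0.5, express=True) == 10.0    # light, express
assert shipping_fee(3.0, express=False) == 9.0    # heavy: 5 + 2 * 2
assert shipping_fee(3.0, express=True) == 18.0    # heavy, express

# The error path also needs an input that forces it:
try:
    shipping_fee(0, express=False)
except ValueError:
    pass  # expected
```

Five inputs are enough to cover every branch here; a black box tester working only from a requirements document might never think to try a zero weight.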
White box testing, on its own, cannot identify problems caused by mismatches between the actual requirements or specification and the code as implemented, but it can help identify some types of design weaknesses in the code. Examples include control flow problems (e.g., closed or infinite loops and unreachable code) and data flow problems. Static code analysis tools may also find these sorts of problems, but they do not help the tester or developer understand the code to the same degree that personally designing white box test cases does. Tools that help in white box testing include Veracode's white box testing tools, GoogleTest, JUnit and RCUNIT.
Advantages & disadvantages
Advantages to white box testing include:
  • Thorough testing.
  • Supports automated testing.
  • Tests and test scripts can be re-used.
  • Testing is supported at earlier development stages.
  • Helps optimize code by revealing unnecessary code that can be removed.
  • Aids in finding errors or weaknesses in the code.
Disadvantages include:
  • Test cases are often unrepresentative of how the component will be used.
  • White box testing is often time consuming, complex and expensive.
  • Testers with internal knowledge of the software are needed.
  • If the software changes frequently, the time and cost required to keep tests current increases.
White box testing vs black box testing
Black box testing is the opposite of white box testing: the tester cannot see the inner workings of the "box" and, in fact, does not need to. Black box testers design test cases to cover all requirements specified for the component, then use a code coverage monitor to record how much of the code is executed when the test cases are run. Unlike white box testing, black box testing does not require developers who worked on the code; testers only need to be familiar with the software's functions. Black box testing is also based on requirement specifications rather than design and structure specifications, and it is typically applied to system and acceptance tests rather than unit, integration and system tests.
The term "gray box testing" is used when tests are designed from internal software structures that are not actually source code -- for example, class hierarchies or module call trees.

Internet of Vehicles

The Internet of Vehicles (IoV) is a distributed network that supports the use of data created by connected cars and vehicular ad hoc networks (VANETs). An important goal of the IoV is to allow vehicles to communicate in real time with their human drivers, pedestrians, other vehicles, roadside infrastructure and fleet management systems.
The IoV supports five types of network communication:

Intra-Vehicle systems that monitor the vehicle's internal performance through On Board Units (OBUs).

Vehicle to Vehicle (V2V) systems that support the wireless exchange of information about the speed and position of surrounding vehicles.

Vehicle to Infrastructure (V2I) systems that support the wireless exchange of information between a vehicle and supporting roadside units (RSUs).

Vehicle to Cloud (V2C) systems that allow the vehicle to access additional information from the internet through application program interfaces (APIs).

Vehicle to Pedestrian (V2P) systems that support awareness of Vulnerable Road Users (VRUs) such as pedestrians and cyclists.

When discussed in the context of 5G and intelligent transport systems (ITS), the five types of networks mentioned above are sometimes referred to as Vehicle to Everything (V2X) communication.
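To make the five communication types concrete, here is a minimal, hypothetical sketch of an IoV message tagged with its link type. Real deployments use standardized message sets (for example, the SAE J2735 Basic Safety Message); none of the type or field names below come from an actual standard.

```python
from dataclasses import dataclass
from enum import Enum

class LinkType(Enum):
    """The five IoV communication types described above."""
    INTRA_VEHICLE = "intra-vehicle"
    V2V = "vehicle-to-vehicle"
    V2I = "vehicle-to-infrastructure"
    V2C = "vehicle-to-cloud"
    V2P = "vehicle-to-pedestrian"

@dataclass
class SafetyMessage:
    """Invented message shape: position and speed, tagged by link type."""
    sender_id: str
    link: LinkType
    latitude: float
    longitude: float
    speed_mps: float

def route(msg):
    """Decide which class of receiver should see a message, by link type."""
    targets = {
        LinkType.INTRA_VEHICLE: "on-board units (OBUs)",
        LinkType.V2V: "nearby vehicles",
        LinkType.V2I: "roadside units (RSUs)",
        LinkType.V2C: "cloud APIs",
        LinkType.V2P: "vulnerable road users (VRUs)",
    }
    return targets[msg.link]

msg = SafetyMessage("car-42", LinkType.V2V, 52.52, 13.40, 13.9)
print(route(msg))  # nearby vehicles
```

Modeling the link type explicitly, as the V2X umbrella term does, lets one message format serve all five network types.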

Future of the IoV

According to a recent report by Allied Market Research, the global IoV market is expected to exceed $200 billion by 2024, and several auto manufacturers, including BMW and Daimler, have announced programs to develop platforms that connect IoV services such as route management and smart parking with onboard infotainment centers.

Information technology (IT) vendors that are currently working with manufacturers and governing organizations to help build the Internet of Vehicles include Apple, Cisco, Google, IBM, Intel, Microsoft and SAP.

Microsoft SCOM (System Center Operations Manager)

Reference:

WhatIs.com WhatIs@lists.techtarget.com

AIOps (artificial intelligence for IT operations)

Artificial intelligence for IT operations (AIOps) is an umbrella term for the use of big data analytics, machine learning (ML) and other artificial intelligence (AI) technologies to automate the identification and resolution of common information technology (IT) issues. The systems, services and applications in a large enterprise produce immense volumes of log and performance data. AIOps uses this data to monitor assets and gain visibility into dependencies within and outside of IT systems.

An AIOps platform should bring three capabilities to the enterprise:
  1. Automate routine practices.
Routine practices include user requests as well as non-critical IT system alerts. For example, AIOps can enable a help desk system to process and fulfill a user request to provision a resource automatically. AIOps platforms can also evaluate an alert and determine that it does not require action because the relevant metrics and supporting data available are within normal parameters.
  2. Recognize serious issues faster and with greater accuracy than humans.
IT professionals might address a known malware event on a noncritical system but ignore an unusual download or process starting on a critical server because they are not watching for that threat. AIOps handles this scenario differently: it prioritizes the event on the critical system as a possible attack or infection because the behavior is out of the norm, and deprioritizes the known malware event, which it can remediate by running an antimalware function.
  3. Streamline the interactions between data center groups and teams.
AIOps provides each functional IT group with relevant data and perspectives. Without AI-enabled operations, teams must share, parse and process information by meeting or manually sending around data. AIOps should learn what analysis and monitoring data to show each group or team from the large pool of resource metrics.
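The first two capabilities can be sketched together: a toy triage function that auto-closes an alert whose metric sits within its normal baseline and escalates out-of-baseline readings, prioritizing critical systems first. The metric names, thresholds and priority labels are invented for illustration only.

```python
def triage_alert(alert, baselines):
    """Auto-close alerts whose metrics are within normal parameters;
    escalate anomalies, with critical systems prioritized first."""
    low, high = baselines[alert["metric"]]
    if low <= alert["value"] <= high:
        return "auto-close"  # routine: no human attention needed
    # Out of baseline: priority depends on how critical the system is.
    return "escalate-p1" if alert["critical"] else "escalate-p3"

# Hypothetical "normal" ranges learned from historical data.
baselines = {"cpu_pct": (0, 85), "disk_pct": (0, 90)}

print(triage_alert({"metric": "cpu_pct", "value": 60, "critical": True}, baselines))    # auto-close
print(triage_alert({"metric": "disk_pct", "value": 97, "critical": True}, baselines))   # escalate-p1
print(triage_alert({"metric": "disk_pct", "value": 97, "critical": False}, baselines))  # escalate-p3
```

A real platform would learn the baselines and priorities rather than hard-code them, but the decision flow, close the routine, surface the anomalous, is the same.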

Use Cases

AIOps is generally used in companies that use DevOps or cloud computing and in large, complex enterprises. AIOps aids teams that use a DevOps model by giving development teams additional insight into their IT environment, which in turn gives operations teams more visibility into changes in production. AIOps can also reduce the risks involved in hybrid cloud platforms by supporting operators across the whole IT infrastructure. More broadly, the ability to automate processes, recognize problems earlier and smooth communication between teams can help any large company with an extensive or complicated IT environment.

AIOps technologies

AIOps uses a combination of AI strategies, including data collection and aggregation, analytics, algorithms, automation and orchestration, machine learning and visualization. Most of these technologies are reasonably well defined and mature.
AIOps data comes from log files, metrics and monitoring tools, helpdesk ticketing systems and other sources. Big data technologies aggregate and organize all of the systems' output into a useful form. Analytics techniques can interpret the raw information to create new data and metadata. Analytics reduces noise, which is unneeded or spurious data and also spots trends and patterns that enable the tool to identify and isolate problems, predict capacity demand and handle other events.
Analytics also requires algorithms to codify the organization's IT expertise, business policies and goals. Algorithms allow an AIOps platform to deliver the most desirable actions or outcomes -- algorithms are how the IT personnel prioritize security-related events and teach application performance decisions to the platform. The algorithms form the foundation for machine learning, wherein the platform establishes a baseline of normal behaviors and activities, and can then evolve or create new algorithms as data from the environment changes over time.
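A minimal sketch of that baselining idea: learn the normal range of a metric from its history, then flag readings that fall too many standard deviations outside it. Production AIOps platforms use far richer models; this z-score test only illustrates the principle, and the numbers are made up.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag a reading that deviates from the learned baseline by more than
    `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu  # flat baseline: any change is anomalous
    return abs(latest - mu) / sigma > threshold

cpu_history = [41, 39, 42, 40, 38, 41, 40, 39]  # learned "normal" load (%)
print(is_anomalous(cpu_history, 40))  # False: within baseline
print(is_anomalous(cpu_history, 95))  # True: far outside baseline
```

The noise-reduction role of analytics falls out naturally here: readings inside the baseline generate no alert at all, so only genuine outliers reach a human or trigger automation.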
Automation is the key underlying technology that lets AIOps tools take action. Automated functions are triggered by the outcomes of analytics and machine learning. For example, when a tool's predictive analytics and ML determine that an application needs more storage, the tool initiates an automated process to add storage in increments consistent with algorithmic rules.
Finally, visualization tools deliver human-readable dashboards, reports, graphics and other output so users follow changes and events in the environment. With these visualizations, humans can take action on information that requires decision-making capabilities beyond those of the AIOps software.

Stages of the AIOps process

AIOps benefits and drawbacks

When properly implemented and trained, an AIOps platform reduces the time and attention of IT staff spent on mundane, routine, everyday alerts. IT staff teaches AIOps platforms, which then evolve with the help of algorithms and machine learning, recycling knowledge gained over time to further improve the software's behavior and effectiveness. AIOps tools also perform continuous monitoring without a need for sleep. Humans in the IT department focus on serious, complex issues and on initiatives that increase business performance and stability.
AIOps software can observe causal relationships over multiple systems, services and resources, clustering and correlating disparate data sources. Those analytics and machine learning capabilities enable software to perform powerful root cause analysis, which accelerates the troubleshooting and remediation of difficult and unusual issues.
AIOps can improve collaboration and workflow activities between IT groups and between IT and other business units. With tailored reports and dashboards, teams can understand their tasks and requirements quickly, and interface with others without learning everything the other team needs to know.
Although the underlying technologies for AIOps are relatively mature, it is still an early field in terms of combining the technologies for practical use. AIOps is only as good as the data it receives and the algorithms that it is taught. The amount of time and effort needed to implement, maintain and manage an AIOps platform can be substantial. The diversity of available data sources as well as proper data storage, protection and retention are all important factors in AIOps results.
AIOps demands trust in tooling, which can be a gating factor for some businesses. For an AIOps tool to act autonomously, it must follow changes within its target environment accurately, gather and secure data, form correct conclusions based on the available algorithms and machine learning, prioritize actions properly and take the appropriate automated actions to match business priorities and objectives.

Implementing AIOps and AIOps vendors

To demonstrate value and mitigate risk from an AIOps deployment, introduce the technology in small, carefully orchestrated phases. Decide on the appropriate hosting model for the tool, such as on site or as a service. IT staff must understand and then train the system to suit the organization's needs, and to do so they must have ample data from the systems under its watch.