You have invested significant time and money in testing your product, creating reams of data about the process, the product and the machines used to test it. Trove is an integral part of making that data investment pay off by making the data useful. The first step in making data useful is making it accessible, and the first step in making it accessible is managing the constant flow of data from test machines to a central database. Whether in laboratory automation or on the production floor, Trove retrieves, archives and makes available the data from automated data acquisition and test systems. Once the data is available, Trove’s batch processing and trending enable engineers, managers and the quality team to collaborate in exploring and improving both process and product.
Fighting the Right Fires
It is no secret that using your data to make better decisions is the right thing to do. Yet many manufacturers are still not doing it. Why? The simple reality is that it takes a concerted effort to step back from daily firefighting and work on these goals. Signal.X and our software products such as Trove can make that effort pay off quickly, winning small victories that build momentum and enable you and your team to fight the right fires.
Data Management and Collaboration
The Trove application is built from several primary components:
- Trove starts by scanning sources of test data for data files and test results, then pulls that data to a central server. Information about each test, including its results, metadata and metrics, is entered into a database.
- Once the information is in the database, Trove’s query engine filters the data for specific reports or jobs. Any metadata field can be used to narrow the returned entries (e.g. return all tests that failed on test stand #2 last month). No cryptic SQL query statements here; the user interface is designed for someone who is not a database administrator.
- Filtered data sets can then be used for reporting or reprocessing. Reports can be generated on the data itself (statistics about metric values over time) or as process-capability summaries (SPC charting, sorting by top n failures, sorting by shift or day, etc.).
- All of this functionality can then be automated as a job. Jobs can run on a periodic basis to produce daily, weekly and/or monthly reports and summaries of production quality, and reports can trigger alarms that notify users when values exceed thresholds.
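Trove’s internals and schema are not shown here, but the workflow the list above describes (ingest to a central database, filter by metadata, summarize, alarm on thresholds) can be sketched in miniature. The table and field names below (`test_results`, `stand`, `passed`, `leak_rate`) are hypothetical, chosen only to illustrate the pattern with a standard SQLite database:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical sketch of the ingest -> query -> summarize -> alarm flow.
# Field names are illustrative, not Trove's actual schema.

def ingest(db, records):
    """Store scanned test results and their metadata in a central database."""
    db.executemany(
        "INSERT INTO test_results (stand, passed, leak_rate, tested_at) "
        "VALUES (?, ?, ?, ?)",
        records,
    )

def failed_on_stand(db, stand, since):
    """Metadata filter: all failed tests on a given stand since a date."""
    return db.execute(
        "SELECT stand, leak_rate, tested_at FROM test_results "
        "WHERE passed = 0 AND stand = ? AND tested_at >= ?",
        (stand, since),
    ).fetchall()

def periodic_job(db, stand, threshold):
    """Scheduled job: summarize recent failures and alarm past a threshold."""
    since = (datetime.now() - timedelta(days=30)).isoformat()
    failures = failed_on_stand(db, stand, since)
    if len(failures) > threshold:
        print(f"ALARM: {len(failures)} failures on stand {stand} in 30 days")
    return len(failures)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE test_results (stand INTEGER, passed INTEGER, "
           "leak_rate REAL, tested_at TEXT)")
now = datetime.now().isoformat()
ingest(db, [(2, 0, 1.8, now), (2, 1, 0.4, now), (1, 0, 2.1, now)])
periodic_job(db, stand=2, threshold=0)  # one recent failure on stand #2
```

In a real deployment the query, summary and alarm steps would be configured through Trove’s interface rather than written by hand; the point of the sketch is only how metadata filtering lets one central data set serve many different reports and jobs.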
Together, these components create an environment where every consumer of the data works from the same data set, collaborating on key initiatives while maintaining the individual data processing each role requires.