The metadata attribute aspect of this is covered by other bugs:
> There are now about 10000 tests. By default, they are sorted in the
> repo by source (Sputnik | ietestcenter)/chapter/section/subsection/...
> Even if it certainly makes sense from a test suite producer's point of
> view (and directory choices have to be made anyway), it doesn't cover all search use cases.
> In order to "identify test holes" more efficiently, as suggested under
> "Community Contributions", it would be helpful to have some tool to
> search for tests. For instance, it's currently hard to tell whether a test has been forgotten regarding:
> * native prototype objects
> For instance, in 15.4.2.2 it is written "The [[Prototype]] internal
> property of the newly constructed object is set to the original Array
> prototype object, the one that is the initial value of Array.prototype".
> There is clearly a test to write based on that, to make sure that a
> monkey-patched Array.prototype isn't used. The same pattern can be
> found for functions (15.3.3, 15.3.4), objects (15.2.2.1 step 4), and certainly in other places.
> * length properties
> * Errors (make sure that code that should throw an error has a test
> for it)
> * strict mode (I assume it's not going to be covered in a chapter)
> * Any other cross-chapter topic you could think of.
> Several different approaches could be taken to tackle that issue:
> -- text search
> -- tag search
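To make the 15.4.2.2 example from the quote concrete, here is a minimal sketch of such a test in plain JavaScript. It is not an actual test262 test; the failure-reporting style (throwing an Error) is an assumption for illustration, not the suite's harness convention:

    // Hypothetical sketch (not an actual test262 test) of the 15.4.2.2 case:
    // the object built by `new Array(len)` must use the *original* Array
    // prototype object, regardless of monkey-patching attempts.
    var originalProto = Array.prototype;

    // Array.prototype is non-writable (15.4.3.1), so in non-strict code this
    // assignment is silently ignored; the point of the test is that the
    // constructor still wires new arrays to the original prototype object.
    Array.prototype = { patched: true };

    var a = new Array(5);
    if (Object.getPrototypeOf(a) !== originalProto) {
      throw new Error("new Array(len) must use the original Array prototype object");
    }

A similar skeleton would apply to the function and object constructor cases mentioned in the same bullet.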
Quoted the wrong email section; the relevant one is:
> - Run partial test suite
> Running the test suite currently takes a long time. I'm currently
> reviewing FF4 failures. When I close my web browser, I have to re-run all tests.
> However, most of the time, I don't care about most of them; I just
> want the ones I haven't reviewed yet. I wish the website could offer an easy way to run, for instance:
> - all chapter 15.2 tests (or any chapter/section/subsection, obviously)
> - all tests found through a search (if/when some search feature
> becomes available)
> - all tests that have been added or changed between two test suite
> versions. If I have reviewed all the tests of version x, I may not care
> about them when version x+1 comes out.
A wild guess is that two weeks would be needed to implement this.
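As a rough illustration of the filtering half of that request, something along these lines would cover the "chapter prefix" and "changed since revision" cases. The test descriptor shape (path, revision) and the selectTests/runTests names are assumptions for the sketch, not the harness's actual API:

    // Hypothetical sketch: pick a subset of tests before handing them to the
    // runner. The `path` and `revision` fields on the descriptors, and the
    // function names, are assumed for illustration only.
    function selectTests(allTests, options) {
      return allTests.filter(function (test) {
        // Keep only tests under the requested chapter/section prefix, e.g. "15.2".
        if (options.chapter && test.path.indexOf(options.chapter) !== 0) {
          return false;
        }
        // Keep only tests added or changed after the last reviewed suite revision.
        if (options.sinceRevision && test.revision <= options.sinceRevision) {
          return false;
        }
        return true;
      });
    }

    // e.g. runTests(selectTests(allTests, { chapter: "15.2", sinceRevision: 37 }));

Running "all tests found through a search" would then just be a matter of feeding the search results into the same runner entry point.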