Evaluation

Participants are welcome to enter any one task or multiple tasks. Each task will be evaluated and ranked separately, and the system that performs best on a given task, measured by identification accuracy, will be declared the winner of that task. The test dataset is a closed dataset and will be made available only after participants have submitted their systems for evaluation. Accuracy is calculated as follows:

Accuracy = (CC / GT) × 100


where CC is the number of correctly identified samples and GT is the ground truth, i.e., the total number of samples.
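For concreteness, the metric amounts to the following computation. This is only an illustrative sketch: the function name and the label lists are our own, not part of the competition protocol.

    # Minimal sketch of the accuracy metric, assuming the predicted and
    # ground-truth script labels are available as parallel lists
    # (illustrative only, not the official evaluation script).
    def accuracy(predicted, ground_truth):
        # CC: number of correctly identified samples
        cc = sum(p == g for p, g in zip(predicted, ground_truth))
        # GT: total number of samples
        return cc / len(ground_truth) * 100

    # Example: 3 of 4 samples identified correctly -> 75.0
    print(accuracy(["Eng", "Ben", "Hin", "Tam"],
                   ["Eng", "Ben", "Hin", "Tel"]))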

Participants have to submit their systems as executables for either Windows (XP or Windows 7) or Linux/Unix. The command lines accepted for the competition are as follows:

  1. Task 1 (Tri-Scripts identification):

    ClassifyEngHin<3rdScriptName> testsample.jpg Task1Result.txt

    where <3rdScriptName> is one of the acronyms from {Ben, Ori, Guj, Pun, Kan, Tam, Tel, Arb}.

    Task1Result.txt is a single file that stores the identification result for every sample in the test dataset, one result per line, in the following format (see the sketch after this list):

    [testSample name]|[Identified script]


    e.g.    Test1.jpg|Eng
            Test2.jpg|Ben


  2. Task 2 (North Indian Scripts identification):

    ClassifyNorthIndianScripts testsample.jpg Task2Result.txt


  3. Task 3 (South Indian Scripts identification):

    ClassifySouthIndianScripts testsample.jpg Task3Result.txt


  4. Task 4 (All Scripts identification):

    ClassifyAllScripts testsample.jpg Task4Result.txt
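
For reference, the sketch below shows one way a submitted system might write its result file in the required [testSample name]|[Identified script] format, and one way the file can be read back. All function and variable names here are illustrative assumptions, not part of the official protocol, and we assume the Task 2 to Task 4 result files follow the same format as Task1Result.txt.

    # Illustrative sketch (not the official tooling): writing and reading a
    # result file in the "[testSample name]|[Identified script]" format.
    def write_results(result_path, identifications):
        # identifications: iterable of (sample_name, script) pairs
        with open(result_path, "w") as f:
            for sample_name, script in identifications:
                f.write(sample_name + "|" + script + "\n")

    def read_results(result_path):
        # Returns {sample_name: script}, tolerating optional spaces around "|".
        results = {}
        with open(result_path) as f:
            for line in f:
                if not line.strip():
                    continue  # skip blank lines
                name, script = [part.strip() for part in line.split("|", 1)]
                results[name] = script
        return results

    write_results("Task1Result.txt", [("Test1.jpg", "Eng"), ("Test2.jpg", "Ben")])
    print(read_results("Task1Result.txt"))  # {'Test1.jpg': 'Eng', 'Test2.jpg': 'Ben'}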