This is Perl code that compares the sequences in two FASTA files and maps the headers of identical sequences to each other. The code works correctly, but I would like to make it more efficient: the files I compare have >100000 sequences each, so it takes a very long time despite using hashes. Can you please suggest a more efficient approach?
Example:
File1:
>Seq1
ABCDEFGH
>Seq2
RNADIMSEQ
>Seq6
XYZ
File2:
>Seq3
ABCDEFGH
>Seq4
RNADIMSEQ
Output:
>Seq1 >Seq3
>Seq2 >Seq4
>Seq6 Not found
The code:

use strict;
use warnings;

my $start_run = time();

# Read the subject file (File1, the bigger file) into %hash: header => sequence
my %hash;
my $previous;
my $count = 0;
open(my $out,  '>', 'Output.txt')  or die "Cannot write Output.txt: $!";
open(my $sbjt, '<', 'File1.fasta') or die "File1.fasta not found: $!";    # bigger file
while (<$sbjt>)
{
    chomp;
    if ($_ =~ m/^\w+/)    # sequence line (header lines start with '>' and fail this match)
    {
        $hash{$previous} = $_;
    }
    else                  # header line
    {
        $previous = $_;
    }
    $count++;
}
close $sbjt;

# Read the query file (File2, the smaller file) into %dash: header => sequence
my %dash;
$previous = undef;
open(my $query, '<', 'File2.fasta') or die "File2.fasta not found: $!";   # smaller file
while (<$query>)
{
    chomp;
    if ($_ =~ m/^\w+/)    # sequence line
    {
        $dash{$previous} = $_;
    }
    else                  # header line
    {
        $previous = $_;
    }
}
close $query;

# For every header in File1, collect all File2 headers carrying an identical sequence
foreach my $key (keys %hash)
{
    my @new_array;
    foreach my $temp (keys %dash)
    {
        if ($dash{$temp} eq $hash{$key})
        {
            push(@new_array, $temp);
        }
    }
    if (@new_array)
    {
        print $out "$key\t@new_array\n";
    }
    else
    {
        print $out "$key\tNot found\n";
    }
}
close $out;

my $end_run = time();
my $run_time = $end_run - $start_run;
print "Job took $run_time seconds\n";