I have tried a few approaches but had no luck solving this problem.
Let me explain the desired behaviour with an example. Say we have the string @"example example". If I call rangeOfWordAtIndex:10 on that string, the result should be the second word @"example", i.e. the range at location 9 with a length of 7. It should not give the first @"example" at index 0 with a length of 7.
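In code, the call I have in mind looks roughly like this (rangeOfWordAtIndex: being the category method shown below):

NSString *s = @"example example";
NSRange wordRange = [s rangeOfWordAtIndex:10];
// wordRange should cover the second "example", not the first one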
Here is the code I have produced so far:
#define unicode_space 32 // ' ' (U+0020); verified by printing it from code

@implementation NSString (wordAt)

- (NSRange)rangeOfWordAtIndex:(NSInteger)index
{
    // Walk left until the character before beginIndex is a space.
    NSInteger beginIndex = index;
    while (beginIndex > 0 && [self characterAtIndex:beginIndex - 1] != unicode_space)
    {
        beginIndex--;
    }

    // Walk right until the character after endIndex is a space.
    NSInteger endIndex = index;
    NSInteger sLength = [self length];
    while (endIndex < sLength && [self characterAtIndex:endIndex + 1] != unicode_space)
    {
        endIndex++;
    }

    return NSMakeRange(beginIndex, endIndex - beginIndex);
}

@end
But it just doesn't work. Without the +1 and -1 it keeps a space as part of the word, and with them it drops the first character of the word. Can someone please give me a useful suggestion?
If you fix your endIndex condition by checking the character at endIndex rather than endIndex + 1 (the + 1 is what causes the NSRangeException), this correctly gives the answer {8, 7} for your example string. Remember that the first index is 0, not 1. Have you also looked at NSLinguisticTagger?
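For completeness, here is a minimal sketch of the category with that change applied. Like the original it assumes words are separated by single ASCII spaces; for rangeOfWordAtIndex:10 on @"example example" it returns {8, 7}.

#import <Foundation/Foundation.h>

@interface NSString (wordAt)
- (NSRange)rangeOfWordAtIndex:(NSInteger)index;
@end

@implementation NSString (wordAt)

- (NSRange)rangeOfWordAtIndex:(NSInteger)index
{
    unichar space = ' '; // U+0020

    // Walk left while the character before beginIndex is not a space.
    NSInteger beginIndex = index;
    while (beginIndex > 0 && [self characterAtIndex:beginIndex - 1] != space)
    {
        beginIndex--;
    }

    // Walk right while the character at endIndex is not a space.
    // Checking endIndex itself (not endIndex + 1) never reads past the end.
    NSInteger endIndex = index;
    NSInteger length = [self length];
    while (endIndex < length && [self characterAtIndex:endIndex] != space)
    {
        endIndex++;
    }

    // endIndex now points at the space (or at the end of the string),
    // so the space itself is excluded from the returned range.
    return NSMakeRange(beginIndex, endIndex - beginIndex);
}

@end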
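And a hedged sketch of the NSLinguisticTagger route. wordRangeAtIndex below is a hypothetical helper, not part of Foundation, but the tagger calls themselves are standard API and also cope with punctuation and separators other than plain spaces.

#import <Foundation/Foundation.h>

// Hypothetical helper: returns the range of the word containing charIndex.
static NSRange wordRangeAtIndex(NSString *string, NSUInteger charIndex)
{
    NSLinguisticTagger *tagger =
        [[NSLinguisticTagger alloc] initWithTagSchemes:@[NSLinguisticTagSchemeTokenType]
                                               options:0];
    tagger.string = string;

    // tokenRange is filled in with the range of the token at charIndex.
    NSRange tokenRange = NSMakeRange(NSNotFound, 0);
    [tagger tagAtIndex:charIndex
                scheme:NSLinguisticTagSchemeTokenType
            tokenRange:&tokenRange
         sentenceRange:NULL];
    return tokenRange;
}

// Usage: wordRangeAtIndex(@"example example", 10) should give {8, 7}.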