
Just how efficient is "good enough" for all intents and purposes? I wrote a script that simply lists all numbers that divide an input x, as pairs (i, n//i), and I was curious how efficient I should aim to be. At what point does a script's running time stop being acceptable? This is my code (advice is appreciated, but I mainly want to give an idea of how it works).

import time

print('''This program determines all basic factors of a given number, x, as ordered pairs (a, b) where ab = x.
Type "quit" to exit.''')

timer = input('Would you like a timer? (y/n) ')
while True:
    try:
        x = input('x = ')
        T0 = time.time()
        b = []
        n = int(int(x) ** 0.5)
        ran = list(range(1, n + 1))
        if int(x) % 2 == 1:
            # an odd number has only odd divisors, so skip the even candidates
            ran = ran[::2]
        for i in ran:
            if int(x) % i == 0:
                b.append((i, int(x) // i))
    except ValueError as error_1:
        if x == 'quit':
            break
        print(error_1)
        continue
    except (EOFError, OverflowError, MemoryError) as error_2:
        print(error_2)
        continue
    T1 = time.time()
    total = T1 - T0
    print(b)
    print(str(len(b)) + ' pairs.')
    if timer == 'y':
        print("%.5f" % total + ' seconds.')

Some of the results are:

x = 9
[(1, 9), (3, 3)]
2 pairs.
0.00000 seconds.

x = 8234324543
[(1, 8234324543)]
1 pairs.
0.07404 seconds.

x = 438756349875
[(1, 438756349875), (3, 146252116625), (5, 87751269975), (15, 29250423325), (25,         17550253995), (75, 5850084665), (125, 3510050799), (375, 1170016933), (557, 787713375),   (1671, 262571125), (2785, 157542675), (8355, 52514225), (13925, 31508535), (41775,   10502845), (69625, 6301707), (208875, 2100569)]
16 pairs.
0.88859 seconds.

So this program can be pretty quick, but the speed drops rather rapidly once you get up to higher numbers. Anyway, I was just wondering: what is considered acceptable by today's standards?

Whatever the requirements state as good enough. –  Rig Aug 18 at 21:09
 
That depends on how much time and effort you want / have to spend on improving performance, replacing data structures and algorithms or replacing Python with C, for example. –  Martijn Pieters Aug 18 at 22:00
 
And if you want to compare relative performance of different approaches, you should use the timeit module. –  Martijn Pieters Aug 18 at 22:01
You are doing integer factorization, and there are many algorithms for that. The one you use (trial division) is the simplest, and also the worst, but it may be enough if your numbers are not too large. Otherwise, you may switch to another algorithm, or use a library, or even call an external program (I once had a factoring function in Python that just called yafu). But Rig's answer is right: that depends on your requirements/needs only. –  arbautjc Aug 19 at 7:10
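As a sketch of one such alternative algorithm (not from the question), Pollard's rho typically finds a nontrivial factor of a composite number far faster than trial division. This minimal version assumes its input is composite; it would loop forever on a prime:

```python
import math
import random

def pollard_rho(n):
    """Return one nontrivial factor of a composite n (does not terminate on primes)."""
    if n % 2 == 0:
        return 2
    while True:
        # random starting point and increment for the pseudo-random sequence
        x = random.randrange(2, n)
        y = x
        c = random.randrange(1, n)
        d = 1
        # Floyd cycle detection: y advances twice as fast as x
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:  # d == n means this attempt failed; retry with new parameters
            return d

f = pollard_rho(438756349875)
print(f, 'divides', 438756349875)
```

The returned factor varies from run to run because the starting point is random, but it always divides the input.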

2 Answers

So this program can be pretty quick, but then the speed drops rather rapidly once you get up to higher numbers

This is normal. Factoring a large number is a highly time-consuming task. Indeed, many cryptography algorithms (such as the widely used RSA) rely on the difficulty of factoring large integers.
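To see the scaling concretely: trial division up to sqrt(n) does on the order of sqrt(n) iterations, so a number 100 times larger takes roughly 10 times longer. A rough sketch (the `count_divisors` helper is assumed, not from the question):

```python
import time

def count_divisors(n):
    # trial division up to sqrt(n): roughly sqrt(n) iterations
    return sum(1 for i in range(1, int(n ** 0.5) + 1) if n % i == 0)

# timing grows with sqrt(n), not with n itself
for n in (10**8 + 7, 10**10 + 19):
    t0 = time.time()
    count_divisors(n)
    print(n, round(time.time() - t0, 4), 'seconds')
```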


There is no absolute answer - performance is relative. Ultimately, the answer depends on the needs of the users.

If you're only ever factoring small numbers, and assuming that you aren't trying to do that millions of times a second, speed doesn't have to be very good, especially if they are typing numbers into a console. Anything under a second is acceptable. If they are using a GUI, response times should generally be under 100ms or so to give the feeling that the results are "immediate".

If you're using the code to crunch millions of numbers, or numbers with hundreds of digits, performance is, of course, more important. However, if you're doing that crunching over night, speed becomes less of a factor, assuming the work gets done before people need it.
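As the comments note, the `timeit` module is the usual way to compare the relative performance of different approaches. A minimal sketch (the `divisor_pairs` helper is assumed, mirroring the question's loop):

```python
import timeit

def divisor_pairs(x):
    # trial division up to sqrt(x), as in the question's loop
    return [(i, x // i) for i in range(1, int(x ** 0.5) + 1) if x % i == 0]

# repeat the measurement and keep the best run to reduce timing noise
best = min(timeit.repeat(lambda: divisor_pairs(1234567), number=100, repeat=3))
print('best of 3 runs of 100 calls: %.4f s' % best)
```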

