
I have a client method that processes a large SQL query. I don't want to load the whole query result into memory; I'd rather process it in batches. Doing that currently entails exposing the JDBC API to the client code. I think this code could be refactored much better.

    //client's method
    m_jdbcManager = JdbcManagerUtil.getInstance();
    m_tupleManager = DbTupleManager.getInstance();
    Connection connection = null;
    PreparedStatement statement = null;
    ResultSet resultSet = null;
    try {
        connection = m_jdbcManager.getConnection();
        connection.setAutoCommit(false);
        statement = connection
                .prepareStatement(Statements.SELECT_RIGHT_TUPLES);
        statement.setInt(1, id);
        statement.setFetchSize(JdbcManagerUtil.FETCH_SIZE);
        resultSet = statement.executeQuery();
        while (resultSet.next()) {
            Tuple tuple = (Tuple) m_tupleManager
                    .readObject(resultSet);
            //process tuple, invokes methods from client class 
        }
        connection.setAutoCommit(true);
        JdbcManagerUtil.closeEverything(connection, statement, resultSet); 
    } catch (<exceptions>) { 
    }

I tried exposing a method that returns a ResultSet, but after processing, the Connection, PreparedStatement, and ResultSet resources must still be released. I'm using JDBC 4 and a PostgreSQL database.

You should update your question and tell us which JDBC client/server you are using. –  rolfl Jan 17 '14 at 12:32
setAutoCommit(true) and closeEverything need to be in a finally block. Otherwise, you may leak resources or break code that relies on autoCommit being true. –  abuzittin gillifirca Jan 17 '14 at 15:23
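A minimal, self-contained sketch of this commenter's point. FakeConnection is a stand-in for the question's JDBC Connection/Statement/ResultSet (no database is needed to run it); the shape of the try/catch/finally is what matters:

```java
// Sketch: restoring autoCommit and closing resources belong in a finally
// block, so they run even when processing throws. FakeConnection is a
// hypothetical stand-in for the JDBC objects in the question.
public class FinallyDemo {
    static class FakeConnection {
        boolean autoCommit = true;
        boolean closed = false;
        void setAutoCommit(boolean b) { autoCommit = b; }
        void close() { closed = true; }
    }

    static FakeConnection runBatch(boolean failMidway) {
        FakeConnection connection = new FakeConnection();
        try {
            connection.setAutoCommit(false);
            // ... prepare statement, set fetch size, iterate the ResultSet ...
            if (failMidway) {
                throw new RuntimeException("error while processing a tuple");
            }
        } catch (RuntimeException e) {
            // handle / log; the question's catch (<exceptions>) goes here
        } finally {
            connection.setAutoCommit(true); // restore even after a failure
            connection.close();             // closeEverything(...) in the question
        }
        return connection;
    }

    public static void main(String[] args) {
        System.out.println(runBatch(false).closed); // true
        System.out.println(runBatch(true).closed);  // true: cleanup ran despite the error
    }
}
```

In the original code, an exception thrown inside the while loop skips both setAutoCommit(true) and closeEverything; moving them into finally closes that leak.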

1 Answer

Your assumption that the entire dataset is read into memory is not necessarily accurate. The JDBC clients I have worked with (DB2, MS SQL Server, Sybase, Oracle, etc.) each default to, or have an option to, limit the size of the client-side buffer. Only a small amount of data is held on the client at any time, and the ResultSet fetches more data as you iterate through it.

There is no need to add another level of buffering. Are you sure the data is all being returned? Have you inspected the configuration options for your JDBC Client?

Thanks, you're right. I mean that if I create a method List<Object> getTuples() in my JdbcManager class that manages queries, then the memory problem will occur. So I would like to process row by row. But then it is difficult to hide the implementation of fetching each row. –  gadon Jan 17 '14 at 12:49
You can return an iterator. Don't forget to close the resources. (I would guess after the last next(), but you might want to check again in finalize() to be sure.) –  abuzittin gillifirca Jan 17 '14 at 15:32
Or return an interface ClosableIterator<T> extends Iterator<T>, Closeable {}, and let the caller do the closing. –  abuzittin gillifirca Jan 17 '14 at 15:39
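A runnable sketch of this ClosableIterator idea. The list-backed implementation here is a stand-in so the example runs without a database; a real version would wrap the question's Connection/PreparedStatement/ResultSet, drive resultSet.next() and m_tupleManager.readObject(resultSet) from hasNext()/next(), and release all three JDBC resources in close():

```java
import java.io.Closeable;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// The interface the comment proposes: an Iterator the caller must close.
interface ClosableIterator<T> extends Iterator<T>, Closeable {}

public class ClosableIteratorDemo {
    // Stand-in implementation backed by a list; a real one would wrap a
    // ResultSet and free the JDBC resources in close().
    static <T> ClosableIterator<T> over(List<T> items) {
        Iterator<T> it = items.iterator();
        return new ClosableIterator<T>() {
            public boolean hasNext() { return it.hasNext(); }
            public T next() { return it.next(); }
            public void close() { /* release JDBC resources here */ }
        };
    }

    public static void main(String[] args) {
        // Closeable is AutoCloseable, so try-with-resources guarantees the
        // caller releases the underlying resources, even on an exception.
        try (ClosableIterator<String> rows = over(Arrays.asList("a", "b"))) {
            while (rows.hasNext()) {
                System.out.println(rows.next());
            }
        } catch (Exception e) {
            // close() is declared to throw IOException
        }
    }
}
```

This keeps the JDBC API hidden from the client while still processing row by row, and it pushes the resource lifetime out to the caller, where try-with-resources makes it hard to get wrong.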
