I think JMH could be useful here. You can run a benchmark similar to the one below and try to identify your bottlenecks.
Note that this uses Mode.SingleShotTime, and so emulates the scenario where the JIT has little opportunity to do its thing.
import org.openjdk.jmh.annotations.*
import java.util.concurrent.TimeUnit
import kotlin.random.Random

//@BenchmarkMode(Mode.AverageTime)
@BenchmarkMode(Mode.SingleShotTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
open class SplitToInt {
    val count = 2 * 1_000_000

    lateinit var stringToParse: String
    lateinit var tokensToParse: List<String>

    @Setup(Level.Invocation)
    fun setup() {
        stringToParse = (0..count).map { Random.nextInt(0, 100) }.joinToString(separator = " ")
        tokensToParse = (0..count).map { Random.nextInt(0, 100) }.map { it.toString() }
    }

    @Benchmark
    open fun split() =
        stringToParse.split(" ")

    @Benchmark
    open fun map_toInt() =
        tokensToParse.map { it.toInt() }

    @Benchmark
    open fun split_map_toInt() =
        stringToParse.split(" ").map { it.toInt() }
}
The stats on my machine are:
Benchmark                   Mode  Cnt    Score   Error  Units
SplitToInt.map_toInt          ss         48.666          ms/op
SplitToInt.split              ss        124.899          ms/op
SplitToInt.split_map_toInt    ss        186.981          ms/op
So splitting the string and mapping to a list of Ints takes ~187 ms. Allowing the JIT to warm up (Mode.AverageTime) gives me:
Benchmark                   Mode  Cnt    Score    Error  Units
SplitToInt.map_toInt        avgt    5   30.670 ±  6.979  ms/op
SplitToInt.split            avgt    5  108.989 ± 23.569  ms/op
SplitToInt.split_map_toInt  avgt    5  120.628 ± 27.052  ms/op
Whether this is fast or slow depends on the circumstances, but are you sure that the input transformation here is the reason you get TLE?
Edit: If you do think that split(" ").map { str -> str.toInt() } is too slow, you could replace creating the two lists (one from split and one from map) with a single list by splitting and transforming in one go. I wrote a quick hack based on kotlin.text.Regex.split that does exactly that, and it is about 20% faster.
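As a rough illustration of the single-pass idea (this is not the Regex-based hack mentioned above; parseIntsInOnePass is a made-up name), something like this works for a single-character delimiter:

```kotlin
// Parse space-separated ints in one pass, building only the final List<Int>
// instead of an intermediate List<String> from split() plus a second list from map().
fun parseIntsInOnePass(s: String, delimiter: Char = ' '): List<Int> {
    val result = ArrayList<Int>()
    var start = 0
    while (start <= s.length) {
        // Find the next delimiter; treat end-of-string as the final boundary.
        val idx = s.indexOf(delimiter, start)
        val end = if (idx == -1) s.length else idx
        if (end > start) result.add(s.substring(start, end).toInt())
        start = end + 1
    }
    return result
}

fun main() {
    println(parseIntsInOnePass("10 20 30"))  // [10, 20, 30]
}
```

Each token is still materialized as a substring before toInt(), so this mainly saves the intermediate list and one full traversal, not the per-token allocations.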
If in your use case you need to examine only part of the input, splitToSequence is probably a better option.
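For example, a lazy pipeline like the sketch below only scans as much of the input as is actually consumed:

```kotlin
fun main() {
    val input = (1..1_000_000).joinToString(" ")
    // splitToSequence is lazy: tokens are produced on demand,
    // so take(10) stops scanning after the first ten tokens.
    val firstTen = input.splitToSequence(" ")
        .take(10)
        .map { it.toInt() }
        .toList()
    println(firstTen)  // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
}
```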