Model Solutions

Lab exercise “Monte Carlo algorithm for approximating \(\pi\) – sequential”

By comparing the values of System.currentTimeMillis() before and after the computation, you can measure how long approximating \(\pi\) with TOTAL_CYCLES randomly drawn points takes. System.currentTimeMillis() returns the number of milliseconds that have elapsed since 1970-01-01 00:00:00 UTC.

var now = System.currentTimeMillis();
/* ... Berechnungen ... */
var time = System.currentTimeMillis() - now;
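As a side note, for elapsed-time measurements System.nanoTime() is often preferable to System.currentTimeMillis(), because it is monotonic and unaffected by system clock adjustments. A minimal sketch (the measured loop is just a placeholder):

```java
public class TimingSketch {
    public static void main(String[] args) {
        var start = System.nanoTime();
        // ... computation to be measured (placeholder loop) ...
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        // nanoTime() differences are meaningful; convert to milliseconds.
        var elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(sum + " computed in " + elapsedMs + " ms");
    }
}
```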

Lab exercise “Monte Carlo algorithm for approximating \(\pi\) – with Future”

Extending InOutTuple

To add two InOutTuple objects together, the method add is added to InOutTuple here. The result is always a new InOutTuple object that represents the sum of the two added InOutTuple objects (this and other).

InOutTuple add(InOutTuple other) {
    return new InOutTuple(in() + other.in(), out() + other.out());
}
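Assuming InOutTuple is a record with components in and out, as the accessor calls in() and out() suggest, the complete type with the new add method could look like this (the record form is an assumption, since the original declaration is not shown):

```java
// Sketch: InOutTuple as an immutable record with long counters in and out.
record InOutTuple(long in, long out) {
    // Component-wise sum; always returns a new tuple.
    InOutTuple add(InOutTuple other) {
        return new InOutTuple(in() + other.in(), out() + other.out());
    }
}

public class InOutTupleDemo {
    public static void main(String[] args) {
        var a = new InOutTuple(3, 1);
        var b = new InOutTuple(2, 4);
        System.out.println(a.add(b)); // InOutTuple[in=5, out=5]
    }
}
```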

With this solution, a new InOutTuple object is created whenever values are combined with add. To this end, the constructor of InOutTuple is invoked; it receives as parameters the sums of the new values and the corresponding values of the original object.

Parallelization with a thread pool

The thread pool is a fixed thread pool, created with the following call. For clarity, its size is given as the constant PARALLELISM:

var pool = Executors.newFixedThreadPool(PARALLELISM);

A LinkedList is used to collect the results, which are available as Future<InOutTuple>. It is filled in a loop with the Futures produced by submit calls on the fixed thread pool.

var futures = new LinkedList<Future<InOutTuple>>();

for (var i = 0; i < PARALLELISM; i++) {
    futures.add(pool.submit(() -> getResultMonteCarloPiDraw(
        TOTAL_CYCLES / PARALLELISM)));
}
pool.shutdown();

After all submit calls have been issued, the pool must be shut down (shutdown). The pending Callables are still run to completion; then the pool's threads are terminated. The number of iterations in each Callable can now be divided by PARALLELISM, since exactly that many computations are performed in the pool.

Thanks to the add method on InOutTuple, the results can easily be combined in an enhanced for loop. The result of get is of type InOutTuple, so the partial results can be aggregated with the add method created in the first step. As a starting point, the neutral element with respect to add is created: the InOutTuple with the value 0 for both in and out.

var result = new InOutTuple(0, 0);
for (var f : futures) {
    result = result.add(f.get());
}
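Putting the pieces together, a self-contained sketch of the thread-pool variant could look as follows. TOTAL_CYCLES, PARALLELISM and getResultMonteCarloPiDraw are modeled after the names used above; the drawing logic and the record form of InOutTuple are assumptions, since their originals are not shown here:

```java
import java.util.LinkedList;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadLocalRandom;

public class MonteCarloPiPool {
    static final int TOTAL_CYCLES = 4_000_000;
    static final int PARALLELISM = 4;

    record InOutTuple(long in, long out) {
        InOutTuple add(InOutTuple other) {
            return new InOutTuple(in() + other.in(), out() + other.out());
        }
    }

    // Draws `cycles` random points in the unit square and counts hits
    // inside the quarter circle (in) vs. outside (out).
    static InOutTuple getResultMonteCarloPiDraw(int cycles) {
        var rnd = ThreadLocalRandom.current();
        long in = 0;
        for (int i = 0; i < cycles; i++) {
            double x = rnd.nextDouble(), y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) in++;
        }
        return new InOutTuple(in, cycles - in);
    }

    public static void main(String[] args)
            throws InterruptedException, ExecutionException {
        var pool = Executors.newFixedThreadPool(PARALLELISM);
        var futures = new LinkedList<Future<InOutTuple>>();
        for (var i = 0; i < PARALLELISM; i++) {
            futures.add(pool.submit(() -> getResultMonteCarloPiDraw(
                TOTAL_CYCLES / PARALLELISM)));
        }
        pool.shutdown();
        // Aggregate the partial results, starting from the neutral element.
        var result = new InOutTuple(0, 0);
        for (var f : futures) {
            result = result.add(f.get());
        }
        var pi = (4.0 * result.in()) / (result.in() + result.out());
        System.out.println(pi);
    }
}
```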

Lab exercise “commonPool”

On the OpenJDK 64-Bit Server VM (build 23.0.1+-wolfi-r1, mixed mode, sharing) (Java SE 23.0.1, released 2024-10-15), the following output is produced on a Chainguard Wolfi-base 1-r6 Linux system running on an Intel Core Ultra 5 125H (\(\mbox{parallelism} = \mbox{number of cores} - 1\) holds):

# Cores: 18
# Threads: 17

Parallelism 20

# Cores: 18
# Threads: 20
Task-01 submitted by pp.CommonPoolTest.main()
Task-01 started by ForkJoinPool.commonPool-worker-18
Task-02 started by ForkJoinPool.commonPool-worker-16
Task-02 submitted by pp.CommonPoolTest.main()
Task-03 submitted by pp.CommonPoolTest.main()
Task-03 started by ForkJoinPool.commonPool-worker-15
Task-04 started by ForkJoinPool.commonPool-worker-14
Task-04 submitted by pp.CommonPoolTest.main()
Task-05 submitted by pp.CommonPoolTest.main()
Task-05 started by ForkJoinPool.commonPool-worker-13
Task-06 started by ForkJoinPool.commonPool-worker-12
Task-06 submitted by pp.CommonPoolTest.main()
Task-07 submitted by pp.CommonPoolTest.main()
Task-08 submitted by pp.CommonPoolTest.main()
Task-07 started by ForkJoinPool.commonPool-worker-11
Task-09 started by ForkJoinPool.commonPool-worker-9
Task-09 submitted by pp.CommonPoolTest.main()
Task-08 started by ForkJoinPool.commonPool-worker-10
Task-10 started by ForkJoinPool.commonPool-worker-20
Task-10 submitted by pp.CommonPoolTest.main()
Task-01 finished by ForkJoinPool.commonPool-worker-18
Task-02 finished by ForkJoinPool.commonPool-worker-16
Task-01 delivered by pp.CommonPoolTest.main()
Task-02 delivered by pp.CommonPoolTest.main()
Task-06 finished by ForkJoinPool.commonPool-worker-12
Task-03 finished by ForkJoinPool.commonPool-worker-15
Task-05 finished by ForkJoinPool.commonPool-worker-13
Task-04 finished by ForkJoinPool.commonPool-worker-14
Task-09 finished by ForkJoinPool.commonPool-worker-9
Task-08 finished by ForkJoinPool.commonPool-worker-10
Task-07 finished by ForkJoinPool.commonPool-worker-11
Task-03 delivered by pp.CommonPoolTest.main()
Task-04 delivered by pp.CommonPoolTest.main()
Task-05 delivered by pp.CommonPoolTest.main()
Task-06 delivered by pp.CommonPoolTest.main()
Task-07 delivered by pp.CommonPoolTest.main()
Task-08 delivered by pp.CommonPoolTest.main()
Task-09 delivered by pp.CommonPoolTest.main()
Task-10 finished by ForkJoinPool.commonPool-worker-20
Task-10 delivered by pp.CommonPoolTest.main()

There are enough threads in the commonPool for all tasks: each of the 10 tasks created by the main thread and submitted to the commonPool is executed in a different thread. Processing of the tasks begins shortly after submission starts and overlaps with it.
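The two header lines are presumably produced by queries like the following; this is a hedged reconstruction, since the source of CommonPoolTest is not shown here:

```java
import java.util.concurrent.ForkJoinPool;

public class PoolInfo {
    public static void main(String[] args) {
        // Number of logical cores visible to the JVM.
        System.out.println("# Cores: "
            + Runtime.getRuntime().availableProcessors());
        // Target parallelism of the common pool (defaults to cores - 1).
        System.out.println("# Threads: "
            + ForkJoinPool.commonPool().getParallelism());
    }
}
```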

Parallelism 3

# Cores: 18
# Threads: 3
Task-01 submitted by pp.CommonPoolTest.main()
Task-01 started by ForkJoinPool.commonPool-worker-3
Task-02 started by ForkJoinPool.commonPool-worker-2
Task-02 submitted by pp.CommonPoolTest.main()
Task-03 submitted by pp.CommonPoolTest.main()
Task-03 started by ForkJoinPool.commonPool-worker-1
Task-04 submitted by pp.CommonPoolTest.main()
Task-05 submitted by pp.CommonPoolTest.main()
Task-06 submitted by pp.CommonPoolTest.main()
Task-07 submitted by pp.CommonPoolTest.main()
Task-08 submitted by pp.CommonPoolTest.main()
Task-09 submitted by pp.CommonPoolTest.main()
Task-10 submitted by pp.CommonPoolTest.main()
Task-01 finished by ForkJoinPool.commonPool-worker-3
Task-03 finished by ForkJoinPool.commonPool-worker-1
Task-02 finished by ForkJoinPool.commonPool-worker-2
Task-05 started by ForkJoinPool.commonPool-worker-1
Task-01 delivered by pp.CommonPoolTest.main()
Task-04 started by ForkJoinPool.commonPool-worker-3
Task-02 delivered by pp.CommonPoolTest.main()
Task-03 delivered by pp.CommonPoolTest.main()
Task-06 started by ForkJoinPool.commonPool-worker-2
Task-05 finished by ForkJoinPool.commonPool-worker-1
Task-07 started by ForkJoinPool.commonPool-worker-1
Task-04 finished by ForkJoinPool.commonPool-worker-3
Task-08 started by ForkJoinPool.commonPool-worker-3
Task-04 delivered by pp.CommonPoolTest.main()
Task-06 finished by ForkJoinPool.commonPool-worker-2
Task-09 started by ForkJoinPool.commonPool-worker-2
Task-05 delivered by pp.CommonPoolTest.main()
Task-06 delivered by pp.CommonPoolTest.main()
Task-07 finished by ForkJoinPool.commonPool-worker-1
Task-10 started by ForkJoinPool.commonPool-worker-1
Task-07 delivered by pp.CommonPoolTest.main()
Task-08 finished by ForkJoinPool.commonPool-worker-3
Task-08 delivered by pp.CommonPoolTest.main()
Task-09 finished by ForkJoinPool.commonPool-worker-2
Task-09 delivered by pp.CommonPoolTest.main()
Task-10 finished by ForkJoinPool.commonPool-worker-1
Task-10 delivered by pp.CommonPoolTest.main()

Here, too, processing of the tasks overlaps with the main thread's submissions. However, only the three threads named ForkJoinPool.commonPool-worker-1, ForkJoinPool.commonPool-worker-2 and ForkJoinPool.commonPool-worker-3 are used. While these threads are busy, processing of the other already submitted tasks is on hold. With respect to this behavior, the commonPool thus acts like a fixed thread pool with parallelism 3.

Parallelism 1

# Cores: 18
# Threads: 1
Task-01 submitted by pp.CommonPoolTest.main()
Task-02 submitted by pp.CommonPoolTest.main()
Task-03 submitted by pp.CommonPoolTest.main()
Task-01 started by ForkJoinPool.commonPool-worker-1
Task-04 submitted by pp.CommonPoolTest.main()
Task-05 submitted by pp.CommonPoolTest.main()
Task-06 submitted by pp.CommonPoolTest.main()
Task-07 submitted by pp.CommonPoolTest.main()
Task-08 submitted by pp.CommonPoolTest.main()
Task-09 submitted by pp.CommonPoolTest.main()
Task-10 submitted by pp.CommonPoolTest.main()
Task-01 finished by ForkJoinPool.commonPool-worker-1
Task-02 started by ForkJoinPool.commonPool-worker-1
Task-01 delivered by pp.CommonPoolTest.main()
Task-02 finished by ForkJoinPool.commonPool-worker-1
Task-03 started by ForkJoinPool.commonPool-worker-1
Task-02 delivered by pp.CommonPoolTest.main()
Task-03 finished by ForkJoinPool.commonPool-worker-1
Task-04 started by ForkJoinPool.commonPool-worker-1
Task-03 delivered by pp.CommonPoolTest.main()
Task-04 finished by ForkJoinPool.commonPool-worker-1
Task-05 started by ForkJoinPool.commonPool-worker-1
Task-04 delivered by pp.CommonPoolTest.main()
Task-05 finished by ForkJoinPool.commonPool-worker-1
Task-06 started by ForkJoinPool.commonPool-worker-1
Task-05 delivered by pp.CommonPoolTest.main()
Task-06 finished by ForkJoinPool.commonPool-worker-1
Task-07 started by ForkJoinPool.commonPool-worker-1
Task-06 delivered by pp.CommonPoolTest.main()
Task-07 finished by ForkJoinPool.commonPool-worker-1
Task-08 started by ForkJoinPool.commonPool-worker-1
Task-07 delivered by pp.CommonPoolTest.main()
Task-08 finished by ForkJoinPool.commonPool-worker-1
Task-09 started by ForkJoinPool.commonPool-worker-1
Task-08 delivered by pp.CommonPoolTest.main()
Task-09 finished by ForkJoinPool.commonPool-worker-1
Task-10 started by ForkJoinPool.commonPool-worker-1
Task-09 delivered by pp.CommonPoolTest.main()
Task-10 finished by ForkJoinPool.commonPool-worker-1
Task-10 delivered by pp.CommonPoolTest.main()

Only a single thread in the commonPool is available for processing the tasks (ForkJoinPool.commonPool-worker-1). The tasks are processed one after another in this thread, in submission order.

Parallelism 0

# Cores: 18
# Threads: 1
Task-01 submitted by pp.CommonPoolTest.main()
Task-02 submitted by pp.CommonPoolTest.main()
Task-03 submitted by pp.CommonPoolTest.main()
Task-04 submitted by pp.CommonPoolTest.main()
Task-05 submitted by pp.CommonPoolTest.main()
Task-06 submitted by pp.CommonPoolTest.main()
Task-07 submitted by pp.CommonPoolTest.main()
Task-08 submitted by pp.CommonPoolTest.main()
Task-09 submitted by pp.CommonPoolTest.main()
Task-10 submitted by pp.CommonPoolTest.main()
^C

The program does not terminate. Contrary to the output Threads: 1, the commonPool with parallelism 0 apparently has no worker threads of its own for processing the tasks. Consequently, the tasks are never started or finished, and the program hangs in the main thread at the first call of get.
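The parallelism values in the runs above are presumably configured via the standard system property java.util.concurrent.ForkJoinPool.common.parallelism, e.g. on the command line with -Djava.util.concurrent.ForkJoinPool.common.parallelism=3. Set programmatically, it must happen before the common pool is first touched anywhere in the JVM; a minimal sketch:

```java
import java.util.concurrent.ForkJoinPool;

public class SetParallelism {
    public static void main(String[] args) {
        // Must be set before the common pool is initialized;
        // afterwards the property is ignored.
        System.setProperty(
            "java.util.concurrent.ForkJoinPool.common.parallelism", "3");
        System.out.println("# Threads: "
            + ForkJoinPool.commonPool().getParallelism());
    }
}
```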

Lab exercise “Monte Carlo algorithm for approximating \(\pi\) – with CompletableFuture”

There are two solution variants. In MonteCarloPiCF1, the solution is realized as one long, explicit chain/nesting of supplyAsync and thenCombineAsync calls:

var now = System.currentTimeMillis();
var result = CompletableFuture.supplyAsync(
    () -> getResultMonteCarloPiDraw(TOTAL_CYCLES / PARALLELISM))
    .thenCombineAsync(
        CompletableFuture
            .supplyAsync(() -> getResultMonteCarloPiDraw(
                TOTAL_CYCLES / PARALLELISM)),
        (InOutTuple x, InOutTuple y) -> x.add(y))
    .thenCombineAsync(
        CompletableFuture
            .supplyAsync(() -> getResultMonteCarloPiDraw(
                TOTAL_CYCLES / PARALLELISM)),
        (InOutTuple x, InOutTuple y) -> x.add(y))
    .thenCombineAsync(
        CompletableFuture
            .supplyAsync(() -> getResultMonteCarloPiDraw(
                TOTAL_CYCLES / PARALLELISM)),
        (InOutTuple x, InOutTuple y) -> x.add(y))
    .thenCombineAsync(
        CompletableFuture
            .supplyAsync(() -> getResultMonteCarloPiDraw(
                TOTAL_CYCLES / PARALLELISM)),
        (InOutTuple x, InOutTuple y) -> x.add(y))
    .thenCombineAsync(
        CompletableFuture
            .supplyAsync(() -> getResultMonteCarloPiDraw(
                TOTAL_CYCLES / PARALLELISM)),
        (InOutTuple x, InOutTuple y) -> x.add(y))
    .thenCombineAsync(
        CompletableFuture
            .supplyAsync(() -> getResultMonteCarloPiDraw(
                TOTAL_CYCLES / PARALLELISM)),
        (InOutTuple x, InOutTuple y) -> x.add(y))
    .join();
var time = System.currentTimeMillis() - now;
var pi = ((4.0 * result.in()) / (result.in() + result.out()));
System.out.println(pi + ", " + time + " ms");

The downside is that extending or modifying this chain quickly becomes unwieldy. In MonteCarloPiCF2, by contrast, a constant named PARALLELISM is again used to specify the number of parts to compute (the parallelism in the strict sense, however, is dictated by the size of the commonPool):

var now = System.currentTimeMillis();
var resultStage = CompletableFuture.supplyAsync(
    () -> getResultMonteCarloPiDraw(TOTAL_CYCLES / PARALLELISM));
for (var i = 1; i < PARALLELISM; i++) {
    resultStage = resultStage.thenCombineAsync(
        CompletableFuture
            .supplyAsync(() -> getResultMonteCarloPiDraw(
                TOTAL_CYCLES / PARALLELISM)),
        (InOutTuple x, InOutTuple y) -> x.add(y));
}
var result = resultStage.join();
var time = System.currentTimeMillis() - now;
var pi = ((4.0 * result.in()) / (result.in() + result.out()));
System.out.println(pi + ", " + time + " ms");
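The loop in MonteCarloPiCF2 can also be expressed as a stream reduction over CompletableFutures. The following is a sketch under the same assumptions as above (the drawing logic and the record form of InOutTuple are stand-ins, since their originals are not shown):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.Stream;

public class MonteCarloPiCFStream {
    static final int TOTAL_CYCLES = 4_000_000;
    static final int PARALLELISM = 4;

    record InOutTuple(long in, long out) {
        InOutTuple add(InOutTuple other) {
            return new InOutTuple(in() + other.in(), out() + other.out());
        }
    }

    // Stand-in for the draw routine used in the text.
    static InOutTuple getResultMonteCarloPiDraw(int cycles) {
        var rnd = ThreadLocalRandom.current();
        long in = 0;
        for (int i = 0; i < cycles; i++) {
            double x = rnd.nextDouble(), y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) in++;
        }
        return new InOutTuple(in, cycles - in);
    }

    public static void main(String[] args) {
        // Create PARALLELISM async stages and fold them pairwise with add.
        var result = Stream
            .generate(() -> CompletableFuture.supplyAsync(
                () -> getResultMonteCarloPiDraw(TOTAL_CYCLES / PARALLELISM)))
            .limit(PARALLELISM)
            .reduce((a, b) -> a.thenCombineAsync(b, InOutTuple::add))
            .orElseThrow()
            .join();
        var pi = (4.0 * result.in()) / (result.in() + result.out());
        System.out.println(pi);
    }
}
```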