XStream not working on Java 17 + Axon 4.5.15 + Kafka 4.5.4 + MongoDB 4.5

Hello,

I am trying to migrate my application to Java 17 and Axon, and I get the following error:

com.thoughtworks.xstream.converters.ConversionException: No converter available
---- Debugging information ----
message : No converter available
type : java.util.Collections$UnmodifiableMap
converter : com.thoughtworks.xstream.converters.reflection.ReflectionConverter
message[1] : Unable to make field private static final long java.util.Collections$UnmodifiableMap.serialVersionUID accessible: module java.base does not "opens java.util" to unnamed module @4b5a5ed1
class : org.axonframework.extensions.kafka.eventhandling.consumer.streamable.KafkaTrackingToken
required-type : org.axonframework.extensions.kafka.eventhandling.consumer.streamable.KafkaTrackingToken
converter-type : com.thoughtworks.xstream.converters.reflection.ReflectionConverter
path : /org.axonframework.extensions.kafka.eventhandling.consumer.streamable.KafkaTrackingToken/positions
line number : 2
version : 1.4.19

And my configuration as follow:

@Bean
fun xStream(): XStream {
    val xStream = XStream()
    xStream.addPermission(AnyTypePermission.ANY)
    xStream.allowTypesByWildcard(arrayOf("br.com.gubee.", "java.util."))
    return xStream
}

@Bean
fun tokenStore(
        storageTemplate: MongoTemplate,
        xStream: XStream
): TokenStore {
    return MongoTokenStore.builder()
            .mongoTemplate(storageTemplate)
            .serializer(XStreamSerializer.builder()
                    .xStream(xStream)
                    .disableAxonTypeSecurity().build())
            .build()
}

How to solve this kind of issue?

You need wildcards for the classes to allow, so something like:

@Bean
public XStream xStream() {
    XStream xStream = new XStream();

    xStream.allowTypesByWildcard(new String[]{
            "java.util.**",
            "tech.gklijs.api.**"
    });
    return xStream;
}

Hello Gerard,

I did exactly as you showed, but when starting the application the error remains:


@Bean
fun xStream(): XStream {
    val xStream = XStream()
    xStream.addPermission(AnyTypePermission.ANY)
    xStream.allowTypesByWildcard(arrayOf("br.com.gubee.**", "java.util.**"))
    return xStream
}

It might be Java 17 related. Since we are probably moving to Jackson as the default anyway, switching to Jackson would probably be the easiest way to work around it.
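For reference, a token-store configuration using Jackson could look roughly like this (a sketch based on the configuration in the original question; bean wiring and builder options may differ in your setup):

```java
@Bean
public TokenStore tokenStore(MongoTemplate storageTemplate) {
    // JacksonSerializer needs no deep reflection into JDK internals,
    // so it is not affected by Java 17's strong encapsulation.
    return MongoTokenStore.builder()
            .mongoTemplate(storageTemplate)
            .serializer(JacksonSerializer.builder().build())
            .build();
}
```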

I tried, but I get a parser error because the tokens were already saved in XML format in the database. How could this migration from one format to the other be done?

I was under the impression you were still in the development phase. You could use upcasters for cases like this, though. Alternatively, stick with XStream on Java 17, but then you likely need tricks like this.
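For completeness: the kind of trick meant here is opening the affected JDK packages to reflection via JVM arguments. Which packages you need depends on your payloads; `java.util` comes straight from the stack trace above:

```
java --add-opens java.base/java.util=ALL-UNNAMED -jar app.jar
```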

For the trackingtokens document, is it a problem if I drop it and recreate a new one from scratch? Would that cause a problem with the Kafka extension? I thought of setting auto-offset-reset to latest, dropping the document, switching from XStream to Jackson, and restarting.
What do you think about this?

It doesn’t matter what you set as auto-offset-reset currently. If you remove the token, it will start from the beginning.

Reading your initial question again, it seems the problem is different from what I thought at first, possibly we need to fix serialization of the token with some additional Jackson annotations.

Does it work if you use Java 11?

Yes.
Java 11, 12… 15 work fine, but 17 does not.
I think the error is related to the data structure used to store the token, which is an immutable map (Collections$UnmodifiableMap), and XStream tries to read its final fields.
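That matches the stack trace: the token's `positions` map is a `Collections$UnmodifiableMap`, and XStream's `ReflectionConverter` needs deep reflective access to its fields. The restriction itself is easy to reproduce without Axon or XStream (minimal sketch, pure JDK):

```java
import java.lang.reflect.Field;
import java.util.Collections;
import java.util.Map;

public class ReflectionProbe {

    // Attempts the same deep reflection XStream's ReflectionConverter performs
    // on Collections$UnmodifiableMap. On JDK 16+ this is denied by default
    // because java.base does not open java.util to the unnamed module.
    static String probe() {
        Map<String, Long> positions = Collections.unmodifiableMap(Map.of("topic-0", 42L));
        try {
            Field f = positions.getClass().getDeclaredField("m");
            f.setAccessible(true); // triggers the module access check
            return "accessible";
        } catch (Exception e) {
            return e.getClass().getSimpleName(); // "InaccessibleObjectException" on 16+
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

On Java 11 this prints `accessible` (with an illegal-access warning), while on Java 17 it prints `InaccessibleObjectException`, which is exactly the error XStream wraps.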

Hi Renato, I discussed this with the team. As XStream is very much dependent on reflection, and that’s limited more and more as Java progresses, it will be hard, nearly impossible, to make the required changes.

So moving to Jackson is the only solution when using Java 17. There is not really a way to convert the stored tokens. But the structure of the token is quite simple. So I think if you really need to, you could have an app with two token stores, and get them from one, and store them via the other, and that should in theory work.
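A minimal sketch of that idea (the processor name, bean names, and the single-segment assumption are mine; multi-segment processors would need a loop over `fetchSegments(..)` with per-segment claim handling):

```java
@Bean
public ApplicationRunner migrateTokens(TokenStore xmlTokenStore, TokenStore jsonTokenStore) {
    return args -> {
        // Read the token via the XStream-backed store (this claims the segment)...
        TrackingToken token = xmlTokenStore.fetchToken("my-processor", 0);
        // ...and seed the Jackson-backed store with it.
        jsonTokenStore.initializeTokenSegments("my-processor", 1, token);
        xmlTokenStore.releaseClaim("my-processor", 0);
    };
}
```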

Hello,

I did this to migrate. After restarting the app, I removed the adapter to use only JSON, so it saves correctly in JSON again:


import org.axonframework.serialization.Converter
import org.axonframework.serialization.SerializedObject
import org.axonframework.serialization.SerializedType
import org.axonframework.serialization.Serializer
import org.axonframework.serialization.xml.XStreamSerializer
import org.slf4j.LoggerFactory

/**
 * Serializes through the delegate (Jackson), but on deserialization tries the
 * XStream serializer first, so tokens still stored in the legacy XML format
 * can be read and then re-saved in JSON by the delegate.
 */
class JsonAdapterSerializer(private val xStreamSerializer: XStreamSerializer,
                            private val delegateSerializer: Serializer) : Serializer {

    override fun <T : Any?> serialize(`object`: Any?, expectedRepresentation: Class<T>?): SerializedObject<T> {
        return delegateSerializer.serialize(`object`, expectedRepresentation)
    }

    override fun <T : Any?> canSerializeTo(expectedRepresentation: Class<T>?): Boolean {
        return delegateSerializer.canSerializeTo(expectedRepresentation)
    }

    override fun <S : Any?, T : Any?> deserialize(serializedObject: SerializedObject<S>?): T {
        // Try the legacy XML format first; fall back to the JSON delegate.
        return try {
            xStreamSerializer.deserialize(serializedObject)
        } catch (ex: Throwable) {
            log.error("JsonAdapterSerializer: XStream deserialization failed: {}", ex.message, ex)
            delegateSerializer.deserialize(serializedObject)
        }
    }

    override fun classForType(type: SerializedType?): Class<*> {
        return delegateSerializer.classForType(type)
    }

    override fun typeForClass(type: Class<*>?): SerializedType {
        return delegateSerializer.typeForClass(type)
    }

    override fun getConverter(): Converter {
        return delegateSerializer.converter
    }

    companion object {
        val log = LoggerFactory.getLogger(JsonAdapterSerializer::class.java)
    }
}
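For anyone wiring this up: the adapter could be plugged into the token store along these lines (a sketch; assumes Jackson as the delegate, matching the discussion above):

```java
@Bean
public TokenStore tokenStore(MongoTemplate storageTemplate, XStream xStream) {
    Serializer migrating = new JsonAdapterSerializer(
            XStreamSerializer.builder().xStream(xStream).build(), // reads legacy XML tokens
            JacksonSerializer.builder().build());                 // writes/reads JSON
    return MongoTokenStore.builder()
            .mongoTemplate(storageTemplate)
            .serializer(migrating)
            .build();
}
```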

I have the same Java 17 problem and used your JsonAdapterSerializer. It works quite well. The only thing I'm worried about is a special token, the "__config" token of type ConfigToken. It is written to the JpaTokenStore once when the store is initialized and seems never to be updated.

The token seems to be some ID sent to Axon Server by the StreamingProcessor. Can I safely remove it, since it will be regenerated directly after the next restart?

Yes Christian, this is part of an optimization. It will be replaced by another random UUID, which is fine. See also the extension code where it is used.