AWS S3 with Java - Reactive: S3 upload doesn't work - spring

I have a Spring Batch job whose first step should fetch a gzipped file from a REST service and save it in an S3 bucket. In the second step it should retrieve this file from S3 and process it. I have tried to follow this guide (the "Single File Upload" case).
I'm able to save the file in S3, but I get "NoSuchKeyException: The specified key does not exist." at the beginning of the second step.
Looking at the S3 bucket, the file was written about 2 seconds after the second step started. It seems the first step reported "file created" even though its processing had not completed. In fact, this log line:
log.info("S3 saving response: {}", response.sdkHttpResponse().isSuccessful());
is not present at all in the logs.
Thanks in advance!
The tasklet of the first step (simplified):
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
    ...
    Mono<ResponseEntity<Flux<DataBuffer>>> response = myService.getAnalyticsDataBuffer2(authToken);
    response.log().map(e -> {
        DataBufferMapper dataBufferMapper = new DataBufferMapper(myFileManager, analyticTripFilename);
        return dataBufferMapper.apply(e);
    }).log().block();
    return RepeatStatus.FINISHED;
}
The method to retrieve the gzipped file:
public Mono<ResponseEntity<Flux<DataBuffer>>> getAnalyticsDataBuffer2(String correlationId, String token) {
    return webClient
            .get()
            .uri(uriBuilder -> uriBuilder.path(ANALITYCS).build())
            .headers(h -> h.setBearerAuth(token))
            .header("Business-User-Id", clientId)
            .header("Connection", "close")
            .accept(MediaType.APPLICATION_OCTET_STREAM)
            .retrieve()
            .toEntityFlux(DataBuffer.class)
            .name(METRICS_NAME)
            .tag(TAG_API_NAME, API_NAME)
            .metrics();
}
The mapper, a Function implementation:
public class DataBufferMapper implements Function<ResponseEntity<Flux<DataBuffer>>, Mono> {

    private final MyFileManager myFileManager;
    private final String analyticTripFilename;

    public DataBufferMapper(MyFileManager myFileManager, String analyticTripFilename) {
        this.myFileManager = myFileManager;
        this.analyticTripFilename = analyticTripFilename;
    }

    @Override
    public Mono apply(ResponseEntity<Flux<DataBuffer>> response) {
        return myFileManager.saveFile(analyticTripFilename, response.getBody(), response.getHeaders().getContentLength());
    }
}
The implementation of the S3 upload:
public class S3FileManager implements MyFileManager {

    @Autowired
    private S3AsyncClient s3AsyncClient;

    @Autowired
    private S3ClientConfig s3ClientConfig;

    @Override
    public Mono saveFile(String fileName, Flux<DataBuffer> body, long contentLength) {
        log.info("S3FileManager saving file/content length {}/{}", fileName, contentLength);
        Flux<java.nio.ByteBuffer> buffers = body.map(DataBuffer::asByteBuffer);
        CompletableFuture<PutObjectResponse> future = s3AsyncClient.putObject(PutObjectRequest.builder()
                .bucket(s3ClientConfig.getBucket())
                .contentLength(contentLength)
                .key(fileName)
                .contentType(MediaType.APPLICATION_OCTET_STREAM.toString())
                .build(), AsyncRequestBody.fromPublisher(buffers));
        return Mono.fromFuture(future)
                .map(response -> {
                    checkResult(response);
                    log.info("S3 saving response: {}", response.sdkHttpResponse().isSuccessful());
                    return response;
                });
    }
}
Update
To isolate the problem, I have added Reactor logs and a 'local' implementation of MyFileManager like this:
@Override
public Mono saveFile(String fileName, Flux<DataBuffer> dataBuffer, long contentLength) {
    String filePath = localFileManagerPath + fileName;
    final Path path = FileSystems.getDefault().getPath(filePath);
    log.info("LocalFileManager saving {}", filePath);
    return DataBufferUtils.write(dataBuffer, path, CREATE_NEW).log();
}
Now the error has clearly turned into 'Input resource must exist', and the log says:
[ INFO] demo.service.batchservice.BatchService: Starting the runAnalyticTripsJob job
[ INFO] org.springframework.batch.core.launch.support.SimpleJobLauncher: Job: [SimpleJob: [name=analyticTripsLoaderJob]] launched with the following parameters: [{analyticTripFilename=ANALYTIC_TRIPS-30052022-175840.858.csv, time=T15:58:40.858902Z, isNewAnalyticTripNeeded=true}]
[ INFO] org.springframework.batch.core.job.SimpleStepHandler: Executing step: [retrievingDataStep]
[ INFO] demo.service.demo.DemoDataRetriever: analyticTripFilename = ANALYTIC_TRIPS-30052022-175840.858.csv, isNewAnalyticTripNeeded = true
[ INFO] demo.client.DemoAuthClient: Successfully accrued demo token
[ INFO] reactor.Mono.OnAssembly.1: | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
[ INFO] reactor.Mono.OnAssembly.2: | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
[ INFO] reactor.Mono.OnAssembly.3: | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
[ INFO] reactor.Mono.OnAssembly.3: | request(unbounded)
[ INFO] reactor.Mono.OnAssembly.2: | request(unbounded)
[ INFO] reactor.Mono.OnAssembly.1: | request(unbounded)
[ INFO] reactor.Mono.OnAssembly.1: | onNext(<200,Flux.onErrorResume ⇢ at org.springframework.web.reactive.function.client.DefaultWebClient$DefaultResponseSpec.handlerEntityFlux(DefaultWebClient.java:635),[access-control-allow-origin:"*", x-request-id:"xyz", content-disposition:"attachment; filename="data.csv.gz"", ... function-execution-id:"j", x-cloud-trace-context:"xxx;o=1", Connection:"close", Content-Length:"68", Content-Type:"application/octet-stream", Via:"1.1 google"]>)
[ INFO] reactor.Mono.OnAssembly.2: | onNext(<200,Flux.onErrorResume ⇢ at org.springframework.web.reactive.function.client.DefaultWebClient$DefaultResponseSpec.handlerEntityFlux(DefaultWebClient.java:635),[access-control-allow-origin:"*", x-request-id:"xyz", content-disposition:"attachment; filename="data.csv.gz"", ... function-execution-id:"j", x-cloud-trace-context:"xxx;o=1", Connection:"close", Content-Length:"68", Content-Type:"application/octet-stream", Via:"1.1 google"]>)
[ INFO] demo.service.demo.LocalFileManager: LocalFileManager saving ./tmp/ANALYTIC_TRIPS-30052022-175840.858.csv
[ INFO] reactor.Mono.OnAssembly.3: | onNext(Mono.log ⇢ at demo.service.demo.DemoDataRetriever.lambda$execute$0(DemoDataRetriever.java:45))
[ INFO] demo.service.demo.DemoDataRetriever: LocalFileManager saved ANALYTIC_TRIPS-30052022-175840.858.csv
[ INFO] reactor.Mono.OnAssembly.2: | onComplete()
[ INFO] reactor.Mono.OnAssembly.3: | onComplete()
[ INFO] reactor.Mono.OnAssembly.1: | onComplete()
[ INFO] org.springframework.batch.core.step.AbstractStep: Step: [retrievingDataStep] executed in 2s257ms
[ INFO] org.springframework.batch.core.job.SimpleStepHandler: Executing step: [processingDataStep]
[ INFO] demo.config.BatchConfiguration: analyticTripFilename = ANALYTIC_TRIPS-30052022-175840.858.csv
[ INFO] demo.service.demo.LocalFileManager: LocalFileManager getFileResource ./tmp/ANALYTIC_TRIPS-30052022-175840.858.csv
[ERROR] org.springframework.batch.core.step.AbstractStep: Encountered an error executing step processingDataStep in job analyticTripsLoaderJob
org.springframework.batch.item.ItemStreamException: Failed to initialize the reader

Finally solved: it was just a matter of turning map into flatMap in the tasklet:
Mono<ResponseEntity<Flux<DataBuffer>>> response = myService.getAnalyticsDataBuffer2(authToken);
response.flatMap(e -> {
    DataBufferMapper dataBufferMapper = new DataBufferMapper(myFileManager, analyticTripFilename);
    return dataBufferMapper.apply(e);
}).block();
Now it works both locally and on S3.
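This makes sense: with map, the lambda returns a Mono, so the chain becomes a Mono<Mono<...>> and block() only waits for the outer Mono, which completes as soon as the inner save Mono is created, not when the save finishes. The S3 putObject future starts eagerly when saveFile is invoked, which is why the file still appeared a couple of seconds later, while the local DataBufferUtils.write Mono was never subscribed at all. With flatMap, the inner Mono is subscribed and flattened, so block() waits for the upload to complete. A minimal sketch of the difference, with a hypothetical save() standing in for the real upload:

import java.time.Duration;
import reactor.core.publisher.Mono;

public class MapVsFlatMap {

    // Hypothetical stand-in for myFileManager.saveFile(...): completes after 2 seconds.
    static Mono<String> save() {
        return Mono.just("saved").delayElement(Duration.ofSeconds(2));
    }

    public static void main(String[] args) {
        // map: the result is a Mono<Mono<String>>, so block() returns immediately
        // with the unfinished inner Mono; the caller moves on before the save runs.
        Mono<String> inner = Mono.just("response").map(r -> save()).block();

        // flatMap: the inner Mono is subscribed and flattened, so block() waits
        // the full 2 seconds and returns the actual result.
        String result = Mono.just("response").flatMap(r -> save()).block();
        System.out.println(result); // prints "saved"
    }
}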

Related

Spring batch keeps writing record after exception

I have a Spring Batch job where I have to check that the id is the same in all the lines of the file and skip the lines that contain a different id. What I did is save the first record and then compare the id of each line; if the id is different, I throw a RuntimeException. But for some reason Spring Batch works until it gets to the line to be excluded, and then repeats the writing process, writing all the records again after the exception.
Here's what I mean:
sms [ Id_Campaign=5598661]
sms [ Id_Campaign=5598661]
sms [ Id_Campaign=5598661]
sms [ Id_Campaign=5598661]
sms [ Id_Campaign=5598661]
2021-06-03 11:12:28.466 ERROR 41416 --- [ scheduling-1] tn.itserv.batch.SkipLinesListener : An error occured while writing the input Force rollback on skippable exception so that skipped item can be located.
sms [ Id_Campaign=5598661]
sms [ Id_Campaign=5598661]
sms [ Id_Campaign=5598661]
sms [ Id_Campaign=5598661]
sms [ Id_Campaign=5598661]
sms [ Id_Campaign=7798661]
My step:
@Bean
public Step loadFiles() throws IOException {
    return stepBuilderFactory
            .get("step1")
            .<FileModelIn, FileModelOut>chunk(100)
            .reader(multiResourceItemReader())
            .processor(batchProcessor())
            .writer(batchWriter())
            .faultTolerant()
            .skipPolicy(skipLinesListener())
            .noRetry(RuntimeException.class)
            .noRollback(RuntimeException.class)
            .listener(MyStepListner())
            .build();
}
SkipPolicy:
public class SkipLinesListener implements SkipPolicy {

    private static final int MAX_SKIP_COUNT = 10;
    private static final Logger logger = LoggerFactory.getLogger(SkipLinesListener.class);

    @Override
    public boolean shouldSkip(Throwable t, int skipCount) throws SkipLimitExceededException {
        if (t instanceof RuntimeException && skipCount < MAX_SKIP_COUNT) {
            RuntimeException ex = (RuntimeException) t;
            logger.error("An error occurred while writing the input " + ex.getMessage());
            return true;
        }
        if (t instanceof FlatFileParseException && skipCount < MAX_SKIP_COUNT) {
            FlatFileParseException ex = (FlatFileParseException) t;
            logger.error("An error occurred while processing the " + ex.getInput());
            return true;
        }
        return false;
    }
}
I don't know why I am getting this behaviour; am I missing something?
I am throwing the exception in the ItemWriter class:
@Override
public void write(List<? extends FileModelOut> items) throws Exception {
    List<FCCampaignModel> campaigns = new ArrayList<FCCampaignModel>();
    List<sms> smsList = new ArrayList<>();
    FCCampaignModel firstLine = cmsDaoProxy.addCampaign(items.get(0).getFcCampaignModel());
    for (FileModelOut fileContent : items) {
        if (fileContent.getFcCampaignModel().getId_Campaign().equals(firstLine.getId_Campaign())) {
            smsRepository.save(fileContent.getSms());
        } else {
            throw new RuntimeException("different id campaign detected : " + fileContent.getFcCampaignModel().getId_Campaign());
        }
    }
}
Your exception is declared as a skippable exception, so when it is thrown from the item writer, Spring Batch will scan the chunk item by item, i.e. re-process the items one by one, each in its own transaction.
This is because items are written in chunks (i.e. in bulk mode), and if an exception occurs during that bulk-write operation, Spring Batch cannot know which item caused the issue, so it retries them one by one. You can find an example in the samples module: Chunk Scanning Sample.
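As a side note, one way to avoid the duplicate writes caused by chunk scanning (a suggestion beyond the original answer) is to reject mismatched ids in an ItemProcessor, so the offending line never reaches the bulk write at all. A sketch, assuming the getFcCampaignModel().getId_Campaign() accessor from the writer above and leaving its return type unspecified:

import org.springframework.batch.item.ItemProcessor;

public class CampaignIdFilter implements ItemProcessor<FileModelOut, FileModelOut> {

    private Object firstCampaignId; // id of the first line seen in the step

    @Override
    public FileModelOut process(FileModelOut item) {
        Object id = item.getFcCampaignModel().getId_Campaign();
        if (firstCampaignId == null) {
            firstCampaignId = id;
        }
        // Returning null filters the item out before the write phase,
        // so no exception is thrown from the writer and nothing is re-scanned.
        return firstCampaignId.equals(id) ? item : null;
    }
}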

hapi-fhir-cli upload examples command giving an error

I'm trying to upload test data to a local JPA FHIR server using hapi-fhir-cli, but while uploading the resources I'm getting the following error.
2020-09-03 17:33:26.486 [main] INFO c.u.f.c.ExampleDataUploader 1 good references
2020-09-03 17:33:26.511 [main] INFO c.u.f.c.ExampleDataUploader Final bundle: 18 entries
2020-09-03 17:33:26.527 [main] INFO c.u.f.c.ExampleDataUploader About to upload 11 examples in a transaction, 2 remaining
2020-09-03 17:33:26.637 [main] INFO c.u.f.c.ExampleDataUploader Final bundle: 62 KB
2020-09-03 17:33:26.641 [main] INFO c.u.f.c.ExampleDataUploader Uploading bundle to server: http://127.0.0.1:8080/hapi-fhir-jpaserver/fhir
2020-09-03 17:33:26.960 [main] ERROR c.u.f.c.ExampleDataUploader Failed to upload bundle:HTTP 0: Failed to retrieve the server metadata statement during client initialization. URL used was http://127.0.0.1:8080/hapi-fhir-jpaserver/fhir/metadata
Even if I replace http://127.0.0.1:8080/hapi-fhir-jpaserver/fhir/metadata with the public HAPI FHIR test server, i.e. http://hapi.fhir.org/baseR4, I get the same error. The error above appears after running the following hapi-fhir-cli command:
hapi-fhir-5.1.0-cli>hapi-fhir-cli upload-examples -t http://127.0.0.1:8080/hapi-fhir-jpaserver/fhir -v dstu2 -l 40
If I change the version to dstu3 or r4, I get a validation error instead, i.e. bundle type 'transaction' not found in the ValueSet defined on the HL7 website, even though it is defined.
Does anyone have any idea about both of these errors? Any help would be appreciated. Thanks.
Can you show where you are creating your client code, please?
But here are the two suggestions I have:
Are you setting the FhirContext to the right version? Do you need a bearer token?
// import ca.uhn.fhir.context.FhirContext;
private FhirContext getContext() {
    return FhirContext.forR4();
}
Note: creating the context (the call to "forR4") is expensive, so you want to minimize the number of times you call it.
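One way to honor that (a sketch, not from the original answer) is to create the context once and hand out the shared instance:

// Sketch: cache the expensive FhirContext in a constant instead of rebuilding it per call.
private static final FhirContext FHIR_CONTEXT = FhirContext.forR4();

private FhirContext getContext() {
    return FHIR_CONTEXT;
}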
// import ca.uhn.fhir.rest.client.api.IGenericClient;
private IGenericClient generateIGenericClient(FhirContext fhirContext, GenericClientCreateArgs createArgs) {
    IGenericClient client = fhirContext.newRestfulGenericClient(createArgs.getServerBase());
    // createArgs is already dereferenced above, so guard the token itself rather than createArgs.
    if (createArgs.getBearerToken() != null && createArgs.getBearerToken().isPresent()) {
        String token = createArgs.getBearerToken().get();
        if (StringUtils.isNotBlank(token)) {
            BearerTokenAuthInterceptor authInterceptor = new BearerTokenAuthInterceptor(token);
            client.registerInterceptor(authInterceptor);
        }
    }
    return client;
}
and my "args" holder class:
import java.util.Optional;

public final class GenericClientCreateArgs {

    private String serverBase;
    private Optional<String> bearerToken;

    public String getServerBase() {
        return serverBase;
    }

    public void setServerBase(String serverBase) {
        this.serverBase = serverBase;
    }

    public Optional<String> getBearerToken() {
        return bearerToken;
    }

    public void setBearerToken(Optional<String> bearerToken) {
        this.bearerToken = bearerToken;
    }
}
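For completeness, a hypothetical usage sketch wiring the pieces together against the public test server mentioned in the question:

// Hypothetical wiring of the holder, context, and client factory shown above.
GenericClientCreateArgs args = new GenericClientCreateArgs();
args.setServerBase("http://hapi.fhir.org/baseR4"); // server base from the question
args.setBearerToken(Optional.empty());             // the public test server needs no token
IGenericClient client = generateIGenericClient(getContext(), args);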

How to use testRestTemplate for a post request with no request body

I have a rest endpoint as below:
@PostMapping(value = "/customers/{customerId}")
public SomeResponse manageCustomers(@PathVariable String customerId) {
    ...
}
This endpoint picks customer data from one system for the given customerId and saves it in another system. Thus, it doesn't require any request body.
I need to write an integration test for this. When I use testRestTemplate for this, I can't find a good enough method where I can pass requestEntity as null. Whenever I do that, I get an exception saying 'uriTemplate must not be null'.
I have tried the 'postForObject' and 'exchange' methods, but they don't work. Any ideas?
Below is my IT:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@DirtiesContext
@ActiveProfiles("test")
class CustomerIT extends Specification {

    @LocalServerPort
    private int port;

    @Autowired
    private TestRestTemplate restTemplate

    def "should get customer from first system and save in second system"() {
        given:
        def customerUrl = new URI("http://localhost:" + port + "/customers/1234")
        def expected = new SomeObject(1)

        when:
        def someObject = restTemplate.postForEntity(customerUrl, null, SomeObject.class)

        then:
        someObject != null
        someObject == expected
    }
}
Using postForEntity(url, null, ResponseType.class) works for me.
My post mapping is equal to yours except for the response type; I used Map just as an example:
@PostMapping(value = "/customers/{customerId}")
public Map<String, String> manageCustomers(@PathVariable String customerId) {
    return new HashMap<String, String>() {{ put("customerId", customerId); }};
}
A test to verify that it works:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class CustomerControllerTest {

    @LocalServerPort
    private int port;

    private final TestRestTemplate testRestTemplate = new TestRestTemplate();

    @Test
    public void postEmptyBodyShouldReturn200OK() {
        String customerId = "123";
        ResponseEntity responseEntity = testRestTemplate
                .postForEntity(format("http://localhost:%s/ping/customers/%s", port, customerId), null, Map.class);
        assertThat(responseEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
        assertThat(responseEntity.getHeaders().getContentType()).isEqualTo(MediaType.APPLICATION_JSON_UTF8);
        assertThat(responseEntity.getBody()).isNotNull();
        assertThat(((LinkedHashMap) responseEntity.getBody()).size()).isEqualTo(1);
        assertThat(((LinkedHashMap) responseEntity.getBody()).get("customerId")).isEqualTo(customerId);
    }
}
Running this test in Maven:
$ mvn -Dtest=CustomerControllerTest test
(...removed unnecessary output...)
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.137 s - in com.ins.example.demo.rest.CustomerControllerTest
10:04:54.683 [Thread-3] INFO o.s.s.c.ThreadPoolTaskExecutor - Shutting down ExecutorService 'applicationTaskExecutor'
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.940 s
[INFO] Finished at: 2019-04-07T10:04:55+02:00
[INFO] ------------------------------------------------------------------------
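If you would rather stick with the exchange method the question mentions, a hedged alternative (reusing the customerUrl and SomeObject from the question's test) is to pass an explicitly empty HttpEntity instead of null:

// Sketch: POST with an empty request entity rather than a null one.
ResponseEntity<SomeObject> response =
        restTemplate.exchange(customerUrl, HttpMethod.POST, HttpEntity.EMPTY, SomeObject.class);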

Spring Boot elasticsearch produce warnings: Transport response handler not found of id

I set up a web app with org.springframework.boot:spring-boot-starter-data-elasticsearch. Everything works well: I can populate indexes with my standalone Elasticsearch 5. But I keep receiving some weird warnings:
2018-05-08 03:07:57.940 WARN 32053 --- [ient_boss][T#7]] o.e.transport.TransportService : Transport response handler not found of id [5]
2018-05-08 03:08:02.949 WARN 32053 --- [ient_boss][T#8]] o.e.transport.TransportService : Transport response handler not found of id [7]
2018-05-08 03:08:07.958 WARN 32053 --- [ient_boss][T#1]] o.e.transport.TransportService : Transport response handler not found of id [9]
2018-05-08 03:08:12.970 WARN 32053 --- [ient_boss][T#2]] o.e.transport.TransportService : Transport response handler not found of id [11]
...
Simple app to reproduce:
@SpringBootApplication
public class App {

    @Configuration
    @EnableElasticsearchRepositories(basePackages = "com.test")
    public class EsConfig {

        @Value("${elasticsearch.host}")
        private String esHost;

        @Value("${elasticsearch.port}")
        private int esPort;

        @Value("${elasticsearch.clustername}")
        private String esClusterName;

        @Bean
        public TransportClient client() throws Exception {
            Settings esSettings = Settings.builder().put("cluster.name", esClusterName).build();
            InetSocketTransportAddress socketAddress = new InetSocketTransportAddress(
                    InetAddress.getByName(esHost), esPort);
            return new PreBuiltTransportClient(esSettings).addTransportAddress(socketAddress);
        }

        @Bean
        public ElasticsearchOperations elasticsearchTemplate(Client client) throws Exception {
            return new ElasticsearchTemplate(client);
        }
    }

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}
My compose file for ES:
version: "2.3"
services:
  elasticsearch:
    image: 'docker.elastic.co/elasticsearch/elasticsearch:5.6.8'
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms700m -Xmx700m
      - PATH_LOGS="/tmp/el-log"
      - cluster.name=dou
    cpu_shares: 1024
    mem_limit: 1024MB
As a transitive dependency the project pulls in org.elasticsearch.client:transport:5.6.8, so the versions of the ES instance and the client library look the same.
So, what does this warning mean & how should we deal with it?

Remove file from remote using streaming inbound channel adapter spring boot implementation

I am trying to remove a file from the remote host using a streaming inbound channel adapter, but the connection is closed before the advice chain runs.
CODE:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpHost);
    factory.setPort(sftpPort);
    factory.setUser(sftpUser);
    factory.setPassword(sftpPwd);
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<LsEntry>(factory);
}
@Bean
@InboundChannelAdapter(channel = "stream", poller = @Poller(cron = "2 * * * * ?"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(template());
    messageSource.setRemoteDirectory(remoteDirecotry);
    messageSource.setFilter(new AcceptAllFileListFilter<>());
    return messageSource;
}

@Bean
public SftpRemoteFileTemplate template() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@Transformer(inputChannel = "stream", outputChannel = "data")
public org.springframework.integration.transformer.Transformer transformer() {
    return new StreamTransformer("UTF-8");
}
@ServiceActivator(inputChannel = "data", adviceChain = "afterChain")
@Bean
public MessageHandler handler() {
    return new MessageHandler() {
        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            String fileName = message.getHeaders().get("file_remoteFile").toString();
            if (!StringUtils.isEmpty(message.toString())) {
                // ... process the streamed file content ...
            } else {
                log.info("No file found in the Remote location");
            }
        }
    };
}
@Bean
public ExpressionEvaluatingRequestHandlerAdvice afterChain() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpression(
            "@template.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    // advice.setOnSuccessExpressionString("@template.remove(headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(true);
    return advice;
}
Wherever I search, everyone suggests implementing ExpressionEvaluatingRequestHandlerAdvice, but it throws the error below.
2018-03-27 12:32:02.618 INFO 23216 --- [ask-scheduler-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [FlowJob: [name=starsBatchJob]] completed with the following parameters: [{JobID=1522168322277}] and the following status: [COMPLETED]
2018-03-27 12:32:02.618 INFO 23216 --- [ask-scheduler-1] c.f.u.config.ParentBatchConfiguration : Job Status Completed
2018-03-27 12:32:02.618 INFO 23216 --- [ask-scheduler-1] c.f.u.config.ParentBatchConfiguration : Total time tokk for Stars Batch execution: 0 seconds.
2018-03-27 12:32:02.618 INFO 23216 --- [ask-scheduler-1] c.f.u.config.ParentBatchConfiguration : Batch Job lock is released
2018-03-27 12:32:02.633 INFO 23216 --- [ask-scheduler-1] com.jcraft.jsch : Disconnecting from hpchd1e.hpc.ford.com port 22
2018-03-27 12:32:02.633 ERROR 23216 --- [ask-scheduler-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Dispatcher failed to deliver Message; nested exception is org.springframework.messaging.MessagingException: Failed to execute on session; nested exception is org.springframework.core.NestedIOException: Failed to remove file: 2: No such file; nested exception is 2
I had this problem. My path to the remote file was incorrect: I needed a trailing /. It is a little difficult to see, since the path is created inside a SpEL expression. You can see the path using the following in the handleMessage() method:
String remoteDirectory = (String) message.getHeaders().get("file_remoteDirectory");
String remoteFile = (String) message.getHeaders().get("file_remoteFile");
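Logging that concatenation makes a missing separator easy to spot; a hypothetical log call along those lines:

// Sketch: print exactly the path the advice's success expression will try to remove.
log.info("Path the advice would remove: {}", remoteDirectory + remoteFile);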
I did have to use advice.setOnSuccessExpressionString("@template.remove(headers['file_remoteFile'])"); (the line commented out above) instead of advice.setOnSuccessExpression("@template.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");.
It is incorrect in the documentation (https://docs.spring.io/spring-integration/reference/html/sftp.html#sftp-streaming), which I believe is why people who struggle with this lose faith in the doc. But this seems to be the only error.
