Category Archives: Coding

Jdbi vs jOOQ

What are these nonsensical four-letter words?

Jdbi and jOOQ are both Java libraries that help you access relational data.

Why not just construct a SQL query as a String and execute it via JDBC? Well, then your code would be vulnerable to SQL injection by an attacker, or more likely, you would have some bug that tries to check age > "blue" at some point. Basically, you need type safety to ensure no other junk gets intentionally or accidentally injected into your query.
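To make that concrete, here's a minimal plain-Java sketch (the table and method names are made up for illustration) of how naive string concatenation lets a crafted input rewrite the query:

```java
public class InjectionDemo {
    // Naive query building: the user's input is pasted straight into the SQL text.
    static String naiveQuery(String name) {
        return "SELECT * FROM employees WHERE name = '" + name + "'";
    }

    public static void main(String[] args) {
        // A benign input behaves as expected...
        System.out.println(naiveQuery("Alice"));

        // ...but a crafted input changes the query's meaning entirely:
        String q = naiveQuery("x' OR '1'='1");
        System.out.println(q);
        // q is now: SELECT * FROM employees WHERE name = 'x' OR '1'='1'
        // which matches every row in the table.
    }
}
```

A JDBC PreparedStatement with a `?` placeholder (or a library like Jdbi or jOOQ binding the value for you) would instead send the whole string as a single bound value, never as SQL.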

Here’s how they differ

Here’s a Jdbi example

@SqlQuery("SELECT * FROM employees WHERE department IN (<departments>)")
@RegisterRowMapper(JdbiEmployeeMapper.class)
List<Employee> getEmployeesByDepartment(@BindList("departments") List<String> departments);

Here’s a jOOQ example

DSLContext dslContext = ...
Employees EMPLOYEES_TABLE = Employees.EMPLOYEES;
List<String> departments = ...
dslContext.select().from(EMPLOYEES_TABLE).where(EMPLOYEES_TABLE.DEPARTMENT.in(departments)).fetch(EmployeeMapper::map);

Both are pretty concise, but I would argue that Jdbi wins in terms of readability. Also, I didn't show it here, but the jOOQ Employees table above needs to be code-generated. There's just more setup involved with jOOQ.

So why the heck would anyone use jOOQ?

Here are a few reasons

  • You want a more complicated query with filtering of arbitrary fields
  • You want a more complicated query with sorting of arbitrary fields
  • You want to paginate your results

Arbitrary Filtering

Suppose the client wants to filter employees on either department or status. You might write another jdbi query filtering by status and choose one or the other. What if the client wants to filter by both department and status? It starts to get a bit more complicated as the number and combination of filter fields increases.

You could do this stackoverflow hack.

SELECT * FROM things
WHERE (:foo IS NULL OR foo_column = :foo)
AND (:bar IS NULL or bar_column = :bar)

I don't think that's the right approach, though. Your query becomes unwieldy, and we haven't even considered performance: the database may pick an irrelevant index (or worse, no index) because it sees foo in the conditional even when it's effectively unused.

Since jOOQ lets you construct the query programmatically, it more naturally fits the bill

dslContext.select().from(EMPLOYEES_TABLE)
    .where(buildCondition(query))
    .fetch(EmployeeMapper::map);
...
private Condition buildCondition(query) {
  Condition condition = noCondition();
  if (!query.departments().isEmpty()) {
    condition = condition.and(EMPLOYEES_TABLE.DEPARTMENT.in(query.departments()));
  }
  ...
  return condition;
}
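For intuition, the same start-from-nothing-and-and-on-clauses pattern can be sketched with plain java.util.function.Predicate over in-memory data (the Employee record and filter fields here are hypothetical, just to illustrate the shape of buildCondition):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class DynamicFilterDemo {
    // Hypothetical in-memory employee row, just for illustration.
    record Employee(String name, String department, String status) {}

    // Mirrors jOOQ's noCondition()/and() pattern: start with a predicate that
    // matches everything, then and-on one clause per filter the client supplied.
    static Predicate<Employee> buildPredicate(List<String> departments, String status) {
        Predicate<Employee> p = e -> true; // like noCondition()
        if (!departments.isEmpty()) {
            p = p.and(e -> departments.contains(e.department()));
        }
        if (status != null) {
            p = p.and(e -> status.equals(e.status()));
        }
        return p;
    }

    public static void main(String[] args) {
        List<Employee> all = List.of(
            new Employee("Ann", "eng", "ACTIVE"),
            new Employee("Bob", "sales", "ACTIVE"),
            new Employee("Cid", "eng", "INACTIVE"));

        // Only Ann matches both the department and status filters.
        List<Employee> filtered = all.stream()
            .filter(buildPredicate(List.of("eng"), "ACTIVE"))
            .collect(Collectors.toList());
        System.out.println(filtered);
    }
}
```

The point is that each filter the client omits simply never makes it into the condition, instead of being null-checked inside the query text.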

Arbitrary Sorting

This can't really be done easily in Jdbi. Jdbi does let you work with an external template engine (the Jdbi Stack Overflow link above has an example), but in my opinion that strays from the ease and focus of Jdbi. And even with the third-party templating approach, there still seems to be a restriction to a static, predeclared set of sortable fields.

Again, jOOQ to the rescue

List<SortField<?>> sortFields = ...
dslContext.select().from(EMPLOYEES_TABLE).orderBy(sortFields).fetch(EmployeeMapper::map);

Pagination

Finally, if you don’t want to return your entire database table at once (probably a good idea), then you’ll need to paginate. In order to paginate, you need to have a deterministic sort order. Otherwise, there’s no telling what the 2nd page of randomly sorted results will hold.
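In plain Java terms, a deterministic order just means your sort has a unique tiebreaker. A quick sketch (hypothetical Employee record) using the unique id to break start-date ties:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StableSortDemo {
    // Hypothetical employee row: a start date plus a unique id.
    record Employee(String startDate, long id) {}

    // Sorting on startDate alone leaves ties in arbitrary order; adding the
    // unique id as a tiebreaker makes the order fully deterministic, so
    // page N+1 always begins exactly where page N ended.
    static final Comparator<Employee> PAGE_ORDER =
        Comparator.comparing(Employee::startDate).reversed()
                  .thenComparing(Employee::id);

    public static void main(String[] args) {
        List<Employee> sorted = Stream.of(
                new Employee("2023-01-01", 4444),
                new Employee("2023-06-15", 1000),
                new Employee("2023-01-01", 2222))
            .sorted(PAGE_ORDER)
            .collect(Collectors.toList());

        // Latest start date first; equal dates broken by ascending id.
        System.out.println(sorted);
    }
}
```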

Assuming you will only ever sort on exactly one field, you can do it with Jdbi. Let's say we always sort employees on start_date in descending order, and we want the page after the first 10 pages of 100 results

select * from employees
order by start_date DESC
LIMIT 1000, 100

This is … just bad. It is slow because the database needs to scan through the first 10 pages of 100 results just to discard them.

There’s a better way using seek. The idea is, what if you could just use the where clause to skip to the next record?

Suppose after the 10th page of 100 results, you get back an employee with a start date of Jan 1st, 2023. Well, maybe we can do something like this

select * from employees
where start_date < '2023-01-01'
order by start_date DESC
limit 100

This lets us leverage the indexes to quickly skip over all the previous records. Now, the astute reader will realize that there could have been multiple employees with that start date, so this query would skip them. Let's suppose the last employee we got back had an id of 4444. This is how we would fix that

select * from employees
where (start_date < '2023-01-01') or (start_date = '2023-01-01' and employee_id > 4444)
order by start_date DESC, employee_id
limit 100

So you can see how this works logically. However, constructing this query in Jdbi would be infeasible, and even with jOOQ's declarative API, building it by hand would be tedious.
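For intuition, that where clause is just a predicate on (start_date, employee_id) pairs. Here's a plain-Java sketch of the keyset logic for a start_date-descending, id-ascending order, applied to in-memory rows (hypothetical Row record; this shows the idea, not jOOQ's implementation):

```java
import java.util.List;
import java.util.stream.Collectors;

public class SeekDemo {
    // Hypothetical row: ISO date string (compares correctly lexicographically) plus id.
    record Row(String startDate, long id) {}

    // Keyset ("seek") predicate for start_date DESC, employee_id ASC:
    // keep only rows strictly after the last row we already returned.
    static boolean afterSeekPoint(Row r, String lastDate, long lastId) {
        int cmp = r.startDate().compareTo(lastDate);
        return cmp < 0                        // earlier start_date (DESC order)
            || (cmp == 0 && r.id() > lastId); // same date: larger id breaks the tie
    }

    public static void main(String[] args) {
        List<Row> all = List.of(
            new Row("2023-05-01", 1),     // already returned on an earlier page
            new Row("2023-01-01", 4444),  // the last row of the previous page
            new Row("2023-01-01", 5555),  // same date, not yet returned
            new Row("2022-12-31", 7));

        // Only (2023-01-01, 5555) and (2022-12-31, 7) survive the seek predicate.
        List<Row> nextPage = all.stream()
            .filter(r -> afterSeekPoint(r, "2023-01-01", 4444))
            .collect(Collectors.toList());
        System.out.println(nextPage);
    }
}
```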

Fortunately, jOOQ does it automagically for you! Simply give it the sort keys and the values from the last row of the previous page, and jOOQ's seek() will construct the query for you

List<String> seekValues = Arrays.asList("2023-01-01", "4444");
dslContext.select().from(EMPLOYEES_TABLE)
    .orderBy(sortFields)
    .seek(seekValues)
    .limit(100);

jOOQ will automatically build the where clause you see above. Just make sure that your sortFields columns line up with your seekValues.

So what should I use?

It depends. For simple queries, I highly recommend Jdbi. For more sophisticated queries, I recommend jOOQ. If you foresee needing jOOQ at all, maybe just go with jOOQ all the way.


Script: delete git branch and switch back to master

I call this script git-done

#!/bin/bash
current_branch=$(git rev-parse --abbrev-ref HEAD)

if [ "$current_branch" = "master" ]
then
  echo "You're already on master"
  exit 0
else
  echo "switching to master"
  git checkout master
  echo "deleting $current_branch..."
  git-delete-branch "$current_branch"
fi

JSR-330: Framework-Agnostic Dependency Injection

In your project, you may have worked with Spring or you may have worked with Dagger. There are other popular dependency injection (DI) frameworks out there, and you select one for your project (I don’t think it’s encouraged to mix-and-match in the same project).

But if you’re writing a library for other projects to use, which DI framework should you standardize on? If you establish Spring as your DI framework, you discourage Dagger DI users, and vice-versa.

This is where JSR-330 comes in.

JSR-330 is basically a small set of annotations that standardizes DI. It does not implement DI; you'll still need a DI framework for that. But it defines which classes are injectable/providable/bean-ifiable and where. This allows users of your library to bring their own DI implementation to your project.

Here are the set of annotations that JSR-330 provides

JSR-330      Spring                Dagger
@Inject      @Autowired
@Named       @Component            @Component
@Qualifier   @Qualifier
@Scope       @Scope
@Singleton   @Scope("singleton")   @Singleton
             @Scope("prototype")

As a small example, here’s a typical Spring DI setup

@org.springframework.stereotype.Component
public class Foo {
}

public class Bar {
  @Autowired
  public Bar(Foo foo) {
  }
}

This is the same using JSR-330

@javax.inject.Named
public class Foo {
}

public class Bar {
  @Inject
  public Bar(Foo foo) {
  }
}

Note that the JSR-330 example will work in both a Spring project and a Dagger project. However, the Spring example will only work in a Spring project.

There will be components that you will not be able to automatically inject. This may be a third-party class that you cannot just add a DI annotation to, or some class that needs to be initialized in a custom way. For these, you'll need to provide them via factory methods, and unfortunately, you'll need to rely on framework-specific code to do that.

Your library could then provide a package that helps others automatically provide those components.

For example, if Bar also needed a third-party DatabaseHandler, then you could provide a Spring configuration like this

@Configuration
public class DatabaseModule {
  @Bean
  public DatabaseHandler databaseHandler() {
    return ...;
  }
}

You may also provide a Dagger-friendly module for them to depend on as well

@Module
public class DatabaseModule {
  @Provides
  @Singleton
  static DatabaseHandler provideDatabaseHandler() {
    return ...;
  }
}


Grep cheatsheet

Find “some phrase” recursively from all files ending with .java

grep -r "some phrase" . --include=\*.java


Redis-cli Cheatsheet

Connect to redis instance

redis-cli -h HOST -a PASSWORD

Subscribe to a PUB/SUB topic

PSUBSCRIBE myTopicId

Subscribe to all PUB/SUB topics

PSUBSCRIBE *


Grafana Heatmap of Prometheus bucket metrics

Bucketed metrics are useful to see which bucket your numbers fall into. Latency is a common example of a bucketed metric, where you may have a bucket for < 10ms, another for < 50ms, etc. The code to create the metric (assuming OpenCensus) would look something like this.

String name = "latency";
MeasureLong measure = MeasureLong.create(name, description, unit);
BucketBoundaries buckets = BucketBoundaries.create(ImmutableList.of(10d, 50d, ...));
Aggregation distribution = Aggregation.Distribution.create(buckets);
io.opencensus.stats.Stats.getViewManager()
        .registerView(View.create(View.Name.create(name), description, measure, distribution, tagKeys));

Then to add a new value to the metric

Stats.recorder()
    .tag(tagKey, tagValue)
    .record(measure, 8372d);

This will generate metrics like this (probably at http://localhost:9090/metrics)

latency_bucket{le="0.001", tag=""} 1.0
latency_bucket{le="0.01", tag=""} 2.0
latency_bucket{le="0.1", tag=""} 4.0

Then you can generate a heatmap of that data in Grafana. To do this

  1. Create a Heatmap panel in Grafana
  2. Under Data source configuration
    1. Your PromQL should look something like this:
      sum(rate(latency_bucket{tag=""}[5m])) by (le)
    2. Legend should be "{{le}}"
    3. Make sure Format is Heatmap
  3. Under Axes configuration
    1. Set Data format to "Time series buckets" (VERY IMPORTANT!!!)

The above heatmap is over time. You can also create a snapshot of the distribution at the current moment.

Same as above, but promQL would be

100 * sum(rate(latency_bucket{tag=""}[5m])) by (le)
/ ignoring (le) group_left
sum(rate(latency_bucket{le="+Inf"}[5m]))


Overriding GraphQL Java Kickstart

The GraphQL Java Kickstart library is a quick and easy way to get started with GraphQL. But it hides a lot of implementation details behind Spring beans, making it not that intuitive to override for your custom needs. I don't think it's very well documented, so here's some info.

The library contains a bunch of Auto Configurations, basically beans automatically injected to give you off-the-shelf functionality. Here’s a list of them.

We are using Spring Webflux so as an example, some of our Auto Configurations are in GraphQLSpringWebfluxAutoConfiguration. Notice it provides a GraphQLObjectMapper but also has the @ConditionalOnMissingBean annotation. So we could replace it with our own GraphQLObjectMapper.


Insomnia environment variables

There was a time when I was a Postman fan. I still am. But my new company embraces Insomnia, so you standardize to reduce friction. Here are a few simple setups to make life easier.

Set up environments and environment variables. I have an environment each for local, staging, and prod. At the very least, you can add a base_url environment variable so that when you switch environments, all your requests automatically point to the right service.

Example environment variable

{
  "base_url": "http://localhost"
}

Referencing base_url in the URL line

I also found this helpful insomnia-plugin-default-headers plugin. It lets you set the same headers on every request inside a DEFAULT_HEADERS environment variable. For example

{
  "base_url": "http://localhost:8080",
  "DEFAULT_HEADERS": {
    "Authorization": "Bearer XXX"
  }
}

So when your authorization token changes, you can change it in one place for all your requests. Also, with different environments for local, staging, and prod, you can keep the tokens separate.


GraphQL: variables within fields

Suppose you had the following graphql definition

type Query {
  company(request: CompanyRequest): Company
}

input CompanyRequest {
  companyId: ID!
}

type Company {
  name: String!
}

The request can be expressed as follows

## query
query CompanyById(
    $request: CompanyRequest!
) {
  company(request: $request) {
    name
  }
}

## variables
{
  "request": {
    "companyId": "310b6453-fc64-4408-977a-5f2cf8dabbcb"
  }
}

Now let's suppose you add a new sub-type to Company, like Employee

extend type Company {
  employees: [Employee!]!
}

type Employee {
  name: String!
  type: EmployeeType!
}

enum EmployeeType {
  FULL_TIME
  PART_TIME
  CONTRACT
}

Querying is as simple as adding the field to your request

## query
query CompanyById(
    $request: CompanyRequest!
) {
  company(request: $request) {
    name
    employees {
      name
      type
    }
  }
}

Now let's add a filter to employees

extend type Company {
  employees(filter: EmployeeFilter): [Employee!]!
}

input EmployeeFilter {
  typeFilter: EmployeeType
}

This is how your request could change

## query
query CompanyById(
    $request: CompanyRequest!,
    $employeeFilter: EmployeeFilter
) {
  company(request: $request) {
    name
    employees(filter: $employeeFilter) {
      name
      type
    }
  }
}

## variables
{
  "request": {
    "companyId": "310b6453-fc64-4408-977a-5f2cf8dabbcb"
  },
  "employeeFilter": {
    "typeFilter": "FULL_TIME"
  }
}

Working with ListenableFuture

References

Baeldung: Guava’s Futures and ListenableFuture

Since we're working with this type of Future, we want to convert other types into ListenableFutures.

From CompletableFuture

import net.javacrumbs.futureconverter.java8guava.FutureConverter;

CompletableFuture<String> myCompletableFuture = ...
ListenableFuture<String> myListenableFuture = 
    FutureConverter.toListenableFuture(myCompletableFuture);

To convert a result to a ListenableFuture
Similar to CompletableFuture.completedFuture("Hello World")

ListenableFuture<String> myFutureString =
    Futures.immediateFuture("hello world");

To convert a List of ListenableFutures to a ListenableFuture of a List
(Similar to CompletableFuture.allOf(myListOfFutures.toArray(new CompletableFuture<?>[0])))

List<ListenableFuture<String>> myListOfFutures = ...
ListenableFuture<List<String>> myFutureOfList =
    Futures.allAsList(myListOfFutures);

To transform results of a ListenableFuture
(Similar to CompletableFuture.thenApply)

public Lion makeFierce(Cat cat) {...}

ListenableFuture<Cat> aCatFuture = ...
ListenableFuture<Lion> aLionFuture = Futures.transform(
    aCatFuture,
    (aCat) -> makeFierce(aCat),
    executor);

When the transform is another async process, then use .transformAsync()
(Similar to CompletableFuture.thenCompose)

public ListenableFuture<Cat> grow(Kitten kitten) {...}

ListenableFuture<Kitten> aKittenFuture = ...
ListenableFuture<Cat> aCatFuture = Futures.transformAsync(
    aKittenFuture,
    (aKitten) -> grow(aKitten),
    executor);

What if you have futures that must run one after another? For example, you're fetching from a service that returns pages of info, and you don't know how many pages there are.

int max_pages_per_fetch = 10;
int page = 1;
ListenableFuture<List<Page>> bookPages =
    recursivelyFetchAllPages(page);

public ListenableFuture<List<Page>> recursivelyFetchAllPages(int page) {
  ListenableFuture<PageSet> pageSetFuture =
      bookService.fetchPage(page, max_pages_per_fetch);

  return Futures.transformAsync(
      pageSetFuture,
      pageSet -> {
        List<Page> mutablePages = new LinkedList<>(pageSet.getPages());

        if (pageSet.hasMorePages()) {
          ListenableFuture<List<Page>> remainingPagesFuture =
              recursivelyFetchAllPages(page + 1);
          return Futures.transform(
              remainingPagesFuture,
              remainingPages -> {
                mutablePages.addAll(remainingPages);
                return mutablePages;
              },
              executor());
        }
        // No more pages: wrap the accumulated list in an already-complete future
        return Futures.immediateFuture(mutablePages);
      },
      executor());
}

Waiting on several different Futures to complete and combining their result
(Similar to CompletableFuture.allOf)

ListenableFuture<Cat> catFuture = ...
ListenableFuture<Dog> dogFuture = ...
ListenableFuture<PetBasket> basketFuture =
    Futures.whenAllSucceed(catFuture, dogFuture)
        .call(() -> {
          Cat cat = Futures.getDone(catFuture);
          Dog dog = Futures.getDone(dogFuture);
          PetBasket basket = new PetBasket();
          basket.add(cat);
          basket.add(dog);
          return basket;
        }, executor);
